00:00:00.001 Started by upstream project "autotest-per-patch" build number 132749 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.119 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.175 Fetching changes from the remote Git repository 00:00:00.180 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.230 Using shallow fetch with depth 1 00:00:00.230 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.230 > git --version # timeout=10 00:00:00.273 > git --version # 'git version 2.39.2' 00:00:00.273 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.302 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.302 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.775 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.787 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.798 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.798 > git config core.sparsecheckout # timeout=10 00:00:05.810 > git read-tree -mu HEAD # timeout=10 00:00:05.829 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.854 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.854 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.969 [Pipeline] Start of Pipeline 00:00:05.999 [Pipeline] library 00:00:06.001 Loading library shm_lib@master 00:00:06.001 Library shm_lib@master is cached. Copying from home. 00:00:06.016 [Pipeline] node 00:00:06.024 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.026 [Pipeline] { 00:00:06.034 [Pipeline] catchError 00:00:06.035 [Pipeline] { 00:00:06.045 [Pipeline] wrap 00:00:06.052 [Pipeline] { 00:00:06.058 [Pipeline] stage 00:00:06.060 [Pipeline] { (Prologue) 00:00:06.261 [Pipeline] sh 00:00:06.552 + logger -p user.info -t JENKINS-CI 00:00:06.570 [Pipeline] echo 00:00:06.572 Node: CYP9 00:00:06.578 [Pipeline] sh 00:00:06.897 [Pipeline] setCustomBuildProperty 00:00:06.907 [Pipeline] echo 00:00:06.909 Cleanup processes 00:00:06.914 [Pipeline] sh 00:00:07.229 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.229 1804156 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.245 [Pipeline] sh 00:00:07.533 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.533 ++ grep -v 'sudo pgrep' 00:00:07.533 ++ awk '{print $1}' 00:00:07.533 + sudo kill -9 00:00:07.533 + true 00:00:07.552 [Pipeline] cleanWs 00:00:07.564 [WS-CLEANUP] Deleting project workspace... 00:00:07.564 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.571 [WS-CLEANUP] done 00:00:07.575 [Pipeline] setCustomBuildProperty 00:00:07.590 [Pipeline] sh 00:00:07.901 + sudo git config --global --replace-all safe.directory '*' 00:00:07.998 [Pipeline] httpRequest 00:00:09.134 [Pipeline] echo 00:00:09.136 Sorcerer 10.211.164.101 is alive 00:00:09.145 [Pipeline] retry 00:00:09.147 [Pipeline] { 00:00:09.160 [Pipeline] httpRequest 00:00:09.165 HttpMethod: GET 00:00:09.165 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.166 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.188 Response Code: HTTP/1.1 200 OK 00:00:09.188 Success: Status code 200 is in the accepted range: 200,404 00:00:09.189 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.967 [Pipeline] } 00:00:14.984 [Pipeline] // retry 00:00:14.991 [Pipeline] sh 00:00:15.282 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:15.300 [Pipeline] httpRequest 00:00:15.700 [Pipeline] echo 00:00:15.702 Sorcerer 10.211.164.101 is alive 00:00:15.711 [Pipeline] retry 00:00:15.713 [Pipeline] { 00:00:15.727 [Pipeline] httpRequest 00:00:15.732 HttpMethod: GET 00:00:15.733 URL: http://10.211.164.101/packages/spdk_c2471e450077f9601e9f40f4449b1ee639f00498.tar.gz 00:00:15.735 Sending request to url: http://10.211.164.101/packages/spdk_c2471e450077f9601e9f40f4449b1ee639f00498.tar.gz 00:00:15.805 Response Code: HTTP/1.1 200 OK 00:00:15.805 Success: Status code 200 is in the accepted range: 200,404 00:00:15.806 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c2471e450077f9601e9f40f4449b1ee639f00498.tar.gz 00:01:45.024 [Pipeline] } 00:01:45.043 [Pipeline] // retry 00:01:45.051 [Pipeline] sh 00:01:45.343 + tar --no-same-owner -xf spdk_c2471e450077f9601e9f40f4449b1ee639f00498.tar.gz 00:01:48.664 [Pipeline] sh 00:01:48.955 + git -C spdk log --oneline -n5 00:01:48.955 c2471e450 nvmf: Clean unassociated_qpairs on connect error 00:01:48.955 5469bd2d1 nvmf/rdma: Fix destroy of uninitialized qpair 00:01:48.955 c7acbd6be test/iscsi_tgt: Remove support for the namespace arg 00:01:48.955 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:01:48.955 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:01:48.968 [Pipeline] } 00:01:48.983 [Pipeline] // stage 00:01:48.993 [Pipeline] stage 00:01:48.996 [Pipeline] { (Prepare) 00:01:49.014 [Pipeline] writeFile 00:01:49.032 [Pipeline] sh 00:01:49.323 + logger -p user.info -t JENKINS-CI 00:01:49.337 [Pipeline] sh 00:01:49.626 + logger -p user.info -t JENKINS-CI 00:01:49.639 [Pipeline] sh 00:01:49.929 + cat autorun-spdk.conf 00:01:49.929 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.929 SPDK_TEST_NVMF=1 00:01:49.929 SPDK_TEST_NVME_CLI=1 00:01:49.929 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.929 SPDK_TEST_NVMF_NICS=e810 00:01:49.929 SPDK_TEST_VFIOUSER=1 00:01:49.929 SPDK_RUN_UBSAN=1 00:01:49.929 NET_TYPE=phy 00:01:49.937 RUN_NIGHTLY=0 00:01:49.942 [Pipeline] readFile 00:01:49.968 [Pipeline] withEnv 00:01:49.970 [Pipeline] { 00:01:49.983 [Pipeline] sh 00:01:50.275 + set -ex 00:01:50.275 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:50.275 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:50.275 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:50.275 ++ SPDK_TEST_NVMF=1 00:01:50.275 ++ SPDK_TEST_NVME_CLI=1 00:01:50.275 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:50.275 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:50.275 ++ SPDK_TEST_VFIOUSER=1 00:01:50.275 ++ SPDK_RUN_UBSAN=1 00:01:50.275 ++ NET_TYPE=phy 00:01:50.275 ++ RUN_NIGHTLY=0 00:01:50.275 + case $SPDK_TEST_NVMF_NICS in 00:01:50.275 + DRIVERS=ice 00:01:50.275 + [[ tcp == \r\d\m\a ]] 00:01:50.275 + [[ -n ice ]] 00:01:50.275 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:50.275 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:50.275 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:50.275 rmmod: ERROR: Module irdma is not currently loaded 00:01:50.275 rmmod: ERROR: Module i40iw is not currently loaded 00:01:50.275 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:50.275 + true 00:01:50.275 + for D in $DRIVERS 00:01:50.275 + sudo modprobe ice 00:01:50.275 + exit 0 00:01:50.286 [Pipeline] } 00:01:50.301 [Pipeline] // withEnv 00:01:50.307 [Pipeline] } 00:01:50.322 [Pipeline] // stage 00:01:50.335 [Pipeline] catchError 00:01:50.338 [Pipeline] { 00:01:50.353 [Pipeline] timeout 00:01:50.353 Timeout set to expire in 1 hr 0 min 00:01:50.355 [Pipeline] { 00:01:50.370 [Pipeline] stage 00:01:50.373 [Pipeline] { (Tests) 00:01:50.388 [Pipeline] sh 00:01:50.686 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:50.686 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:50.686 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:50.686 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:50.686 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.686 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:50.686 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:50.686 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:50.686 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:50.686 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:50.686 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:50.686 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:50.686 + source /etc/os-release 00:01:50.686 ++ NAME='Fedora Linux' 00:01:50.686 ++ VERSION='39 (Cloud Edition)' 00:01:50.686 ++ ID=fedora 00:01:50.686 ++ VERSION_ID=39 00:01:50.686 ++ VERSION_CODENAME= 00:01:50.686 ++ PLATFORM_ID=platform:f39 00:01:50.686 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:50.686 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:50.686 ++ LOGO=fedora-logo-icon 00:01:50.686 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:50.686 ++ HOME_URL=https://fedoraproject.org/ 00:01:50.686 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:50.686 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:50.686 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:50.686 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:50.686 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:50.686 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:50.686 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:50.686 ++ SUPPORT_END=2024-11-12 00:01:50.686 ++ VARIANT='Cloud Edition' 00:01:50.686 ++ VARIANT_ID=cloud 00:01:50.686 + uname -a 00:01:50.686 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:50.686 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:53.990 Hugepages 00:01:53.990 node hugesize free / total 00:01:53.990 node0 1048576kB 0 / 0 00:01:53.990 node0 2048kB 0 / 0 00:01:53.990 node1 1048576kB 0 / 0 00:01:53.990 node1 2048kB 0 / 0 00:01:53.990 
00:01:53.990 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:53.990 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:53.990 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:53.990 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:53.990 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:53.990 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:53.990 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:53.990 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:53.990 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:53.990 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:53.990 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:53.990 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:53.990 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:53.990 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:53.990 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:53.990 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:53.990 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:53.990 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:53.990 + rm -f /tmp/spdk-ld-path 00:01:53.990 + source autorun-spdk.conf 00:01:53.990 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.990 ++ SPDK_TEST_NVMF=1 00:01:53.990 ++ SPDK_TEST_NVME_CLI=1 00:01:53.990 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.990 ++ SPDK_TEST_NVMF_NICS=e810 00:01:53.990 ++ SPDK_TEST_VFIOUSER=1 00:01:53.990 ++ SPDK_RUN_UBSAN=1 00:01:53.990 ++ NET_TYPE=phy 00:01:53.990 ++ RUN_NIGHTLY=0 00:01:53.990 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:53.990 + [[ -n '' ]] 00:01:53.990 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.990 + for M in /var/spdk/build-*-manifest.txt 00:01:53.990 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:53.990 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.990 + for M in /var/spdk/build-*-manifest.txt 00:01:53.990 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:53.990 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.990 + for M in /var/spdk/build-*-manifest.txt 00:01:53.990 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:53.990 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.990 ++ uname 00:01:53.990 + [[ Linux == \L\i\n\u\x ]] 00:01:53.990 + sudo dmesg -T 00:01:53.990 + sudo dmesg --clear 00:01:53.990 + dmesg_pid=1805601 00:01:53.990 + [[ Fedora Linux == FreeBSD ]] 00:01:53.990 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.990 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.990 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:53.990 + [[ -x /usr/src/fio-static/fio ]] 00:01:53.990 + export FIO_BIN=/usr/src/fio-static/fio 00:01:53.990 + FIO_BIN=/usr/src/fio-static/fio 00:01:53.990 + sudo dmesg -Tw 00:01:53.990 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:53.990 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:53.990 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:53.990 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.990 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.990 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:53.990 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.990 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.990 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:53.990 18:13:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:53.991 18:13:48 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:53.991 18:13:48 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:53.991 18:13:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:53.991 18:13:48 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:54.253 18:13:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:54.253 18:13:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:54.253 18:13:48 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:54.253 18:13:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:54.253 18:13:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:54.253 18:13:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:54.253 18:13:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.253 18:13:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.253 18:13:48 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.253 18:13:48 -- paths/export.sh@5 -- $ export PATH 00:01:54.253 18:13:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:54.253 18:13:48 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:54.253 18:13:48 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:54.253 18:13:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733505228.XXXXXX 00:01:54.253 18:13:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733505228.67Jgj8 00:01:54.253 18:13:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:54.253 18:13:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:54.253 18:13:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:54.253 18:13:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:54.253 18:13:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:54.253 18:13:48 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:54.253 18:13:48 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:54.253 18:13:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.253 18:13:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:54.253 18:13:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:54.253 18:13:48 -- pm/common@17 -- $ local monitor 00:01:54.253 18:13:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.253 18:13:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.253 18:13:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.253 18:13:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:54.253 18:13:48 -- pm/common@21 -- $ date +%s 00:01:54.253 18:13:48 -- pm/common@25 -- $ sleep 1 00:01:54.253 18:13:48 -- pm/common@21 -- $ date +%s 00:01:54.253 18:13:48 -- pm/common@21 -- $ date +%s 00:01:54.253 18:13:48 -- pm/common@21 -- $ date +%s 00:01:54.253 18:13:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733505228 00:01:54.253 18:13:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733505228 00:01:54.253 18:13:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733505228 00:01:54.253 18:13:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733505228 00:01:54.253 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733505228_collect-cpu-load.pm.log 00:01:54.253 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733505228_collect-vmstat.pm.log 00:01:54.253 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733505228_collect-cpu-temp.pm.log 00:01:54.253 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733505228_collect-bmc-pm.bmc.pm.log 00:01:55.198 18:13:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:55.198 18:13:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:55.198 18:13:49 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:55.198 18:13:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:55.198 18:13:49 -- spdk/autobuild.sh@16 -- $ date -u 00:01:55.198 Fri Dec 6 05:13:49 PM UTC 2024 00:01:55.198 18:13:49 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:55.198 v25.01-pre-306-gc2471e450 00:01:55.198 18:13:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:55.198 18:13:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:55.198 18:13:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:55.198 18:13:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:55.198 18:13:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:55.198 18:13:49 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.198 ************************************ 00:01:55.198 START TEST ubsan 00:01:55.198 ************************************ 00:01:55.198 18:13:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:55.198 using ubsan 00:01:55.198 00:01:55.198 real 0m0.001s 00:01:55.198 user 0m0.000s 00:01:55.198 sys 0m0.001s 00:01:55.198 18:13:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:55.198 18:13:49 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:55.198 ************************************ 00:01:55.198 END TEST ubsan 00:01:55.198 ************************************ 00:01:55.459 18:13:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:55.459 18:13:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:55.459 18:13:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:55.459 18:13:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:55.459 18:13:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:55.459 18:13:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:55.459 18:13:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:55.459 18:13:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:55.459 
18:13:50 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:55.459 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:55.459 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:56.032 Using 'verbs' RDMA provider 00:02:11.878 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:24.107 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:24.679 Creating mk/config.mk...done. 00:02:24.679 Creating mk/cc.flags.mk...done. 00:02:24.679 Type 'make' to build. 00:02:24.679 18:14:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:02:24.679 18:14:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:24.679 18:14:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:24.679 18:14:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.679 ************************************ 00:02:24.679 START TEST make 00:02:24.680 ************************************ 00:02:24.680 18:14:19 make -- common/autotest_common.sh@1129 -- $ make -j144 00:02:25.252 make[1]: Nothing to be done for 'all'. 00:02:26.639 The Meson build system 00:02:26.639 Version: 1.5.0 00:02:26.639 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:26.639 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:26.639 Build type: native build 00:02:26.639 Project name: libvfio-user 00:02:26.639 Project version: 0.0.1 00:02:26.639 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:26.639 C linker for the host machine: cc ld.bfd 2.40-14 00:02:26.639 Host machine cpu family: x86_64 00:02:26.639 Host machine cpu: x86_64 00:02:26.639 Run-time dependency threads found: YES 00:02:26.639 Library dl found: YES 00:02:26.639 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:26.639 Run-time dependency json-c found: YES 0.17 00:02:26.639 Run-time dependency cmocka found: YES 1.1.7 00:02:26.639 Program pytest-3 found: NO 00:02:26.639 Program flake8 found: NO 00:02:26.639 Program misspell-fixer found: NO 00:02:26.639 Program restructuredtext-lint found: NO 00:02:26.639 Program valgrind found: YES (/usr/bin/valgrind) 00:02:26.639 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.639 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.639 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.639 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:26.639 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:26.639 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:26.639 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:26.639 Build targets in project: 8 00:02:26.639 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:26.639 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:26.639 00:02:26.639 libvfio-user 0.0.1 00:02:26.639 00:02:26.639 User defined options 00:02:26.639 buildtype : debug 00:02:26.639 default_library: shared 00:02:26.639 libdir : /usr/local/lib 00:02:26.639 00:02:26.639 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.899 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:27.159 [1/37] Compiling C object samples/null.p/null.c.o 00:02:27.159 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:27.159 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:27.159 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:27.159 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:27.159 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:27.159 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:27.159 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:27.159 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:27.160 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:27.160 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:27.160 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:27.160 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:27.160 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:27.160 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:27.160 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:27.160 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:27.160 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:27.160 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:27.160 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:27.160 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:27.160 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:27.160 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:27.160 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:27.160 [25/37] Compiling C object samples/server.p/server.c.o 00:02:27.160 [26/37] Compiling C object samples/client.p/client.c.o 00:02:27.160 [27/37] Linking target samples/client 00:02:27.160 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:27.160 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:27.160 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:27.421 [31/37] Linking target test/unit_tests 00:02:27.421 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:27.421 [33/37] Linking target samples/server 00:02:27.421 [34/37] Linking target samples/gpio-pci-idio-16 00:02:27.421 [35/37] Linking target samples/null 00:02:27.421 [36/37] Linking target samples/lspci 00:02:27.421 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:27.421 INFO: autodetecting backend as ninja 00:02:27.421 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:27.421 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:27.994 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:27.994 ninja: no work to do. 00:02:34.585 The Meson build system 00:02:34.585 Version: 1.5.0 00:02:34.585 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:34.585 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:34.585 Build type: native build 00:02:34.585 Program cat found: YES (/usr/bin/cat) 00:02:34.585 Project name: DPDK 00:02:34.585 Project version: 24.03.0 00:02:34.585 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:34.585 C linker for the host machine: cc ld.bfd 2.40-14 00:02:34.585 Host machine cpu family: x86_64 00:02:34.585 Host machine cpu: x86_64 00:02:34.585 Message: ## Building in Developer Mode ## 00:02:34.585 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.585 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:34.585 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.585 Program python3 found: YES (/usr/bin/python3) 00:02:34.585 Program cat found: YES (/usr/bin/cat) 00:02:34.585 Compiler for C supports arguments -march=native: YES 00:02:34.585 Checking for size of "void *" : 8 00:02:34.585 Checking for size of "void *" : 8 (cached) 00:02:34.585 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:34.585 Library m found: YES 00:02:34.585 Library numa found: YES 00:02:34.585 Has header "numaif.h" : YES 00:02:34.585 Library fdt found: NO 00:02:34.585 Library execinfo found: NO 00:02:34.585 Has header "execinfo.h" : YES 00:02:34.585 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:34.585 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.585 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.585 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.585 Run-time dependency openssl found: YES 3.1.1 00:02:34.585 Run-time dependency libpcap found: YES 1.10.4 00:02:34.585 Has header "pcap.h" with dependency libpcap: YES 00:02:34.585 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.585 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.585 Compiler for C supports arguments -Wformat: YES 00:02:34.585 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:34.585 Compiler for C supports arguments -Wformat-security: NO 00:02:34.585 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.585 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.585 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.585 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.585 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.585 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.585 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.585 Compiler for C supports arguments -Wundef: YES 00:02:34.585 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.585 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.585 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:34.585 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.585 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.585 Program objdump found: YES (/usr/bin/objdump) 00:02:34.585 Compiler for C supports arguments -mavx512f: YES 00:02:34.585 Checking if "AVX512 checking" compiles: YES 00:02:34.585 Fetching value of define "__SSE4_2__" : 1 00:02:34.585 Fetching value of define "__AES__" : 1 00:02:34.585 Fetching value of define "__AVX__" : 1 00:02:34.585 Fetching value of define "__AVX2__" : 1 00:02:34.585 Fetching value of define "__AVX512BW__" : 1 00:02:34.585 Fetching value of define "__AVX512CD__" : 1 00:02:34.585 Fetching value of define "__AVX512DQ__" : 1 00:02:34.585 Fetching value of define "__AVX512F__" : 1 00:02:34.585 Fetching value of define "__AVX512VL__" : 1 00:02:34.585 Fetching value of define "__PCLMUL__" : 1 00:02:34.585 Fetching value of define "__RDRND__" : 1 00:02:34.585 Fetching value of define "__RDSEED__" : 1 00:02:34.585 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:34.585 Fetching value of define "__znver1__" : (undefined) 00:02:34.585 Fetching value of define "__znver2__" : (undefined) 00:02:34.585 Fetching value of define "__znver3__" : (undefined) 00:02:34.585 Fetching value of define "__znver4__" : (undefined) 00:02:34.585 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.585 Message: lib/log: Defining dependency "log" 00:02:34.585 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.585 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.585 Checking for function "getentropy" : NO 00:02:34.585 Message: lib/eal: Defining dependency "eal" 00:02:34.585 Message: lib/ring: Defining dependency "ring" 00:02:34.585 Message: lib/rcu: Defining dependency "rcu" 00:02:34.585 Message: lib/mempool: Defining dependency "mempool" 00:02:34.585 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.585 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.585 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.585 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.585 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.585 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:34.585 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:34.585 Compiler for C supports arguments -mpclmul: YES 00:02:34.585 Compiler for C supports arguments -maes: YES 00:02:34.585 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.585 Compiler for C supports arguments -mavx512bw: YES 00:02:34.585 Compiler for C supports arguments -mavx512dq: YES 00:02:34.585 Compiler for C supports arguments -mavx512vl: YES 00:02:34.585 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.585 Compiler for C supports arguments -mavx2: YES 00:02:34.585 Compiler for C supports arguments -mavx: YES 00:02:34.585 Message: lib/net: Defining dependency "net" 00:02:34.585 Message: lib/meter: Defining dependency "meter" 00:02:34.585 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.585 Message: lib/pci: Defining dependency "pci" 00:02:34.585 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.585 Message: lib/hash: Defining dependency "hash" 00:02:34.585 Message: lib/timer: Defining dependency "timer" 00:02:34.585 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.585 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.586 Message: lib/dmadev: Defining dependency "dmadev" 
00:02:34.586 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:34.586 Message: lib/power: Defining dependency "power" 00:02:34.586 Message: lib/reorder: Defining dependency "reorder" 00:02:34.586 Message: lib/security: Defining dependency "security" 00:02:34.586 Has header "linux/userfaultfd.h" : YES 00:02:34.586 Has header "linux/vduse.h" : YES 00:02:34.586 Message: lib/vhost: Defining dependency "vhost" 00:02:34.586 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.586 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.586 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.586 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.586 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.586 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.586 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.586 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.586 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.586 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.586 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:34.586 Configuring doxy-api-html.conf using configuration 00:02:34.586 Configuring doxy-api-man.conf using configuration 00:02:34.586 Program mandb found: YES (/usr/bin/mandb) 00:02:34.586 Program sphinx-build found: NO 00:02:34.586 Configuring rte_build_config.h using configuration 00:02:34.586 Message: 00:02:34.586 ================= 00:02:34.586 Applications Enabled 00:02:34.586 ================= 00:02:34.586 00:02:34.586 apps: 00:02:34.586 00:02:34.586 00:02:34.586 Message: 00:02:34.586 ================= 00:02:34.586 Libraries Enabled 00:02:34.586 ================= 00:02:34.586 00:02:34.586 libs: 00:02:34.586 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.586 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.586 cryptodev, dmadev, power, reorder, security, vhost, 00:02:34.586 00:02:34.586 Message: 00:02:34.586 =============== 00:02:34.586 Drivers Enabled 00:02:34.586 =============== 00:02:34.586 00:02:34.586 common: 00:02:34.586 00:02:34.586 bus: 00:02:34.586 pci, vdev, 00:02:34.586 mempool: 00:02:34.586 ring, 00:02:34.586 dma: 00:02:34.586 00:02:34.586 net: 00:02:34.586 00:02:34.586 crypto: 00:02:34.586 00:02:34.586 compress: 00:02:34.586 00:02:34.586 vdpa: 00:02:34.586 00:02:34.586 00:02:34.586 Message: 00:02:34.586 ================= 00:02:34.586 Content Skipped 00:02:34.586 ================= 00:02:34.586 00:02:34.586 apps: 00:02:34.586 dumpcap: explicitly disabled via build config 00:02:34.586 graph: explicitly disabled via build config 00:02:34.586 pdump: explicitly disabled via build config 00:02:34.586 proc-info: explicitly disabled via build config 00:02:34.586 test-acl: explicitly disabled via build config 00:02:34.586 test-bbdev: explicitly disabled via build config 00:02:34.586 test-cmdline: explicitly disabled via build config 00:02:34.586 test-compress-perf: explicitly disabled via build config 00:02:34.586 test-crypto-perf: explicitly disabled via build config 00:02:34.586 test-dma-perf: explicitly disabled via build config 00:02:34.586 test-eventdev: explicitly disabled via build config 00:02:34.586 test-fib: explicitly disabled via build config 00:02:34.586 test-flow-perf: explicitly disabled via build config 00:02:34.586 test-gpudev: explicitly disabled 
via build config 00:02:34.586 test-mldev: explicitly disabled via build config 00:02:34.586 test-pipeline: explicitly disabled via build config 00:02:34.586 test-pmd: explicitly disabled via build config 00:02:34.586 test-regex: explicitly disabled via build config 00:02:34.586 test-sad: explicitly disabled via build config 00:02:34.586 test-security-perf: explicitly disabled via build config 00:02:34.586 00:02:34.586 libs: 00:02:34.586 argparse: explicitly disabled via build config 00:02:34.586 metrics: explicitly disabled via build config 00:02:34.586 acl: explicitly disabled via build config 00:02:34.586 bbdev: explicitly disabled via build config 00:02:34.586 bitratestats: explicitly disabled via build config 00:02:34.586 bpf: explicitly disabled via build config 00:02:34.586 cfgfile: explicitly disabled via build config 00:02:34.586 distributor: explicitly disabled via build config 00:02:34.586 efd: explicitly disabled via build config 00:02:34.586 eventdev: explicitly disabled via build config 00:02:34.586 dispatcher: explicitly disabled via build config 00:02:34.586 gpudev: explicitly disabled via build config 00:02:34.586 gro: explicitly disabled via build config 00:02:34.586 gso: explicitly disabled via build config 00:02:34.586 ip_frag: explicitly disabled via build config 00:02:34.586 jobstats: explicitly disabled via build config 00:02:34.586 latencystats: explicitly disabled via build config 00:02:34.586 lpm: explicitly disabled via build config 00:02:34.586 member: explicitly disabled via build config 00:02:34.586 pcapng: explicitly disabled via build config 00:02:34.586 rawdev: explicitly disabled via build config 00:02:34.586 regexdev: explicitly disabled via build config 00:02:34.586 mldev: explicitly disabled via build config 00:02:34.586 rib: explicitly disabled via build config 00:02:34.586 sched: explicitly disabled via build config 00:02:34.586 stack: explicitly disabled via build config 00:02:34.586 ipsec: explicitly disabled via build config 00:02:34.586 pdcp: explicitly disabled via build config 00:02:34.586 fib: explicitly disabled via build config 00:02:34.586 port: explicitly disabled via build config 00:02:34.586 pdump: explicitly disabled via build config 00:02:34.586 table: explicitly disabled via build config 00:02:34.586 pipeline: explicitly disabled via build config 00:02:34.586 graph: explicitly disabled via build config 00:02:34.586 node: explicitly disabled via build config 00:02:34.586 00:02:34.586 drivers: 00:02:34.586 common/cpt: not in enabled drivers build config 00:02:34.586 common/dpaax: not in enabled drivers build config 00:02:34.586 common/iavf: not in enabled drivers build config 00:02:34.586 common/idpf: not in enabled drivers build config 00:02:34.586 common/ionic: not in enabled drivers build config 00:02:34.586 common/mvep: not in enabled drivers build config 00:02:34.586 common/octeontx: not in enabled drivers build config 00:02:34.586 bus/auxiliary: not in enabled drivers build config 00:02:34.586 bus/cdx: not in enabled drivers build config 00:02:34.586 bus/dpaa: not in enabled drivers build config 00:02:34.586 bus/fslmc: not in enabled drivers build config 00:02:34.586 bus/ifpga: not in enabled drivers build config 00:02:34.586 bus/platform: not in enabled drivers build config 00:02:34.586 bus/uacce: not in enabled drivers build config 00:02:34.586 bus/vmbus: not in enabled drivers build config 00:02:34.586 common/cnxk: not in enabled drivers build config 00:02:34.586 common/mlx5: not in enabled drivers build config 00:02:34.586 
common/nfp: not in enabled drivers build config 00:02:34.586 common/nitrox: not in enabled drivers build config 00:02:34.586 common/qat: not in enabled drivers build config 00:02:34.586 common/sfc_efx: not in enabled drivers build config 00:02:34.586 mempool/bucket: not in enabled drivers build config 00:02:34.586 mempool/cnxk: not in enabled drivers build config 00:02:34.586 mempool/dpaa: not in enabled drivers build config 00:02:34.586 mempool/dpaa2: not in enabled drivers build config 00:02:34.586 mempool/octeontx: not in enabled drivers build config 00:02:34.586 mempool/stack: not in enabled drivers build config 00:02:34.586 dma/cnxk: not in enabled drivers build config 00:02:34.586 dma/dpaa: not in enabled drivers build config 00:02:34.586 dma/dpaa2: not in enabled drivers build config 00:02:34.586 dma/hisilicon: not in enabled drivers build config 00:02:34.586 dma/idxd: not in enabled drivers build config 00:02:34.586 dma/ioat: not in enabled drivers build config 00:02:34.586 dma/skeleton: not in enabled drivers build config 00:02:34.586 net/af_packet: not in enabled drivers build config 00:02:34.586 net/af_xdp: not in enabled drivers build config 00:02:34.586 net/ark: not in enabled drivers build config 00:02:34.586 net/atlantic: not in enabled drivers build config 00:02:34.586 net/avp: not in enabled drivers build config 00:02:34.586 net/axgbe: not in enabled drivers build config 00:02:34.586 net/bnx2x: not in enabled drivers build config 00:02:34.586 net/bnxt: not in enabled drivers build config 00:02:34.586 net/bonding: not in enabled drivers build config 00:02:34.586 net/cnxk: not in enabled drivers build config 00:02:34.586 net/cpfl: not in enabled drivers build config 00:02:34.586 net/cxgbe: not in enabled drivers build config 00:02:34.586 net/dpaa: not in enabled drivers build config 00:02:34.586 net/dpaa2: not in enabled drivers build config 00:02:34.586 net/e1000: not in enabled drivers build config 00:02:34.586 net/ena: not in enabled drivers build config 00:02:34.586 net/enetc: not in enabled drivers build config 00:02:34.586 net/enetfec: not in enabled drivers build config 00:02:34.586 net/enic: not in enabled drivers build config 00:02:34.586 net/failsafe: not in enabled drivers build config 00:02:34.586 net/fm10k: not in enabled drivers build config 00:02:34.586 net/gve: not in enabled drivers build config 00:02:34.586 net/hinic: not in enabled drivers build config 00:02:34.586 net/hns3: not in enabled drivers build config 00:02:34.586 net/i40e: not in enabled drivers build config 00:02:34.586 net/iavf: not in enabled drivers build config 00:02:34.586 net/ice: not in enabled drivers build config 00:02:34.586 net/idpf: not in enabled drivers build config 00:02:34.586 net/igc: not in enabled drivers build config 00:02:34.586 net/ionic: not in enabled drivers build config 00:02:34.586 net/ipn3ke: not in enabled drivers build config 00:02:34.586 net/ixgbe: not in enabled drivers build config 00:02:34.586 net/mana: not in enabled drivers build config 00:02:34.586 net/memif: not in enabled drivers build config 00:02:34.586 net/mlx4: not in enabled drivers build config 00:02:34.586 net/mlx5: not in enabled drivers build config 00:02:34.586 net/mvneta: not in enabled drivers build config 00:02:34.586 net/mvpp2: not in enabled drivers build config 00:02:34.586 net/netvsc: not in enabled drivers build config 00:02:34.586 net/nfb: not in enabled drivers build config 00:02:34.587 net/nfp: not in enabled drivers build config 00:02:34.587 net/ngbe: not in enabled drivers build 
config 00:02:34.587 net/null: not in enabled drivers build config 00:02:34.587 net/octeontx: not in enabled drivers build config 00:02:34.587 net/octeon_ep: not in enabled drivers build config 00:02:34.587 net/pcap: not in enabled drivers build config 00:02:34.587 net/pfe: not in enabled drivers build config 00:02:34.587 net/qede: not in enabled drivers build config 00:02:34.587 net/ring: not in enabled drivers build config 00:02:34.587 net/sfc: not in enabled drivers build config 00:02:34.587 net/softnic: not in enabled drivers build config 00:02:34.587 net/tap: not in enabled drivers build config 00:02:34.587 net/thunderx: not in enabled drivers build config 00:02:34.587 net/txgbe: not in enabled drivers build config 00:02:34.587 net/vdev_netvsc: not in enabled drivers build config 00:02:34.587 net/vhost: not in enabled drivers build config 00:02:34.587 net/virtio: not in enabled drivers build config 00:02:34.587 net/vmxnet3: not in enabled drivers build config 00:02:34.587 raw/*: missing internal dependency, "rawdev" 00:02:34.587 crypto/armv8: not in enabled drivers build config 00:02:34.587 crypto/bcmfs: not in enabled drivers build config 00:02:34.587 crypto/caam_jr: not in enabled drivers build config 00:02:34.587 crypto/ccp: not in enabled drivers build config 00:02:34.587 crypto/cnxk: not in enabled drivers build config 00:02:34.587 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.587 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.587 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.587 crypto/mlx5: not in enabled drivers build config 00:02:34.587 crypto/mvsam: not in enabled drivers build config 00:02:34.587 crypto/nitrox: not in enabled drivers build config 00:02:34.587 crypto/null: not in enabled drivers build config 00:02:34.587 crypto/octeontx: not in enabled drivers build config 00:02:34.587 crypto/openssl: not in enabled drivers build config 00:02:34.587 crypto/scheduler: not in enabled drivers build config 00:02:34.587 crypto/uadk: not in enabled drivers build config 00:02:34.587 crypto/virtio: not in enabled drivers build config 00:02:34.587 compress/isal: not in enabled drivers build config 00:02:34.587 compress/mlx5: not in enabled drivers build config 00:02:34.587 compress/nitrox: not in enabled drivers build config 00:02:34.587 compress/octeontx: not in enabled drivers build config 00:02:34.587 compress/zlib: not in enabled drivers build config 00:02:34.587 regex/*: missing internal dependency, "regexdev" 00:02:34.587 ml/*: missing internal dependency, "mldev" 00:02:34.587 vdpa/ifc: not in enabled drivers build config 00:02:34.587 vdpa/mlx5: not in enabled drivers build config 00:02:34.587 vdpa/nfp: not in enabled drivers build config 00:02:34.587 vdpa/sfc: not in enabled drivers build config 00:02:34.587 event/*: missing internal dependency, "eventdev" 00:02:34.587 baseband/*: missing internal dependency, "bbdev" 00:02:34.587 gpu/*: missing internal dependency, "gpudev" 00:02:34.587 00:02:34.587 00:02:34.587 Build targets in project: 84 00:02:34.587 00:02:34.587 DPDK 24.03.0 00:02:34.587 00:02:34.587 User defined options 00:02:34.587 buildtype : debug 00:02:34.587 default_library : shared 00:02:34.587 libdir : lib 00:02:34.587 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:34.587 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:34.587 c_link_args : 00:02:34.587 cpu_instruction_set: native 00:02:34.587 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:34.587 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:34.587 enable_docs : false 00:02:34.587 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:34.587 enable_kmods : false 00:02:34.587 max_lcores : 128 00:02:34.587 tests : false 00:02:34.587 00:02:34.587 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.587 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:34.587 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.587 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.587 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.587 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.587 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.587 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.587 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.587 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:34.587 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.587 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:34.587 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.587 [12/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.587 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.587 [14/267] Linking static target lib/librte_kvargs.a 00:02:34.587 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:34.587 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:34.587 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.587 [18/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.587 [19/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.587 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:34.587 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.587 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.587 [23/267] Linking static target lib/librte_log.a 00:02:34.587 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.587 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.587 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.587 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.587 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:34.587 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.587 [30/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.587 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.587 [32/267] Linking static target lib/librte_pci.a 00:02:34.845 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:34.845 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.845 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.845 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.845 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:34.845 [38/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:34.845 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:34.845 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:35.105 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.105 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.105 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:35.105 [44/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:35.105 [45/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:35.105 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:35.105 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:35.105 [48/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:35.105 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:35.105 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:35.105 [51/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.105 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:35.105 [53/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:35.105 [54/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:35.105 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.105 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.105 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:35.105 [58/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.105 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:35.105 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:35.105 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:35.105 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:35.105 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.105 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:35.105 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:35.105 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.105 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.105 [68/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:35.105 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 
00:02:35.105 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:35.105 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:35.105 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.105 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.105 [74/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.105 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:35.105 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:35.105 [77/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:35.105 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:35.105 [79/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:35.105 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.105 [81/267] Linking static target lib/librte_timer.a 00:02:35.105 [82/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:35.105 [83/267] Linking static target lib/librte_telemetry.a 00:02:35.105 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.105 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.105 [86/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.105 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:35.105 [88/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.105 [89/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:35.105 [90/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:35.105 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:35.105 [92/267] Linking static target lib/librte_meter.a 00:02:35.105 [93/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.105 [94/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:35.105 [95/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:35.105 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.105 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.105 [98/267] Linking static target lib/librte_rcu.a 00:02:35.105 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.105 [100/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.105 [101/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:35.105 [102/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.105 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:35.105 [104/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:35.105 [105/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:35.105 [106/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:35.105 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.105 [108/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:35.105 [109/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.105 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.105 
[111/267] Linking static target lib/librte_ring.a 00:02:35.105 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.105 [113/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:35.105 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.105 [115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:35.105 [116/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.105 [117/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.105 [118/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.105 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.105 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.105 [121/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.105 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:35.105 [123/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:35.105 [124/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.105 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:35.105 [126/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:35.105 [127/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:35.105 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:35.105 [129/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.105 [130/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:35.105 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.105 [132/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.105 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.106 [134/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:35.106 [135/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.106 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:35.106 [137/267] Linking static target lib/librte_cmdline.a 00:02:35.106 [138/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.106 [139/267] Linking static target lib/librte_power.a 00:02:35.106 [140/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.106 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.106 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.106 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:35.106 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:35.106 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.106 [146/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.106 [147/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.106 [148/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.106 [149/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.106 [150/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.365 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.365 [152/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.365 [153/267] Linking static target lib/librte_compressdev.a 00:02:35.365 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.365 [155/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:35.365 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:35.365 [157/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.365 [158/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.365 [159/267] Linking static target lib/librte_dmadev.a 00:02:35.365 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.365 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.365 [162/267] Linking target lib/librte_log.so.24.1 00:02:35.365 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:35.365 [164/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:35.365 [165/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.365 [166/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:35.365 [167/267] Linking static target lib/librte_mempool.a 00:02:35.365 [168/267] Linking static target lib/librte_net.a 00:02:35.365 [169/267] Linking static target lib/librte_eal.a 00:02:35.365 [170/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:35.365 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.365 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:35.365 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.365 [174/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.365 [175/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.365 [176/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.365 [177/267] Linking static target lib/librte_security.a 00:02:35.365 [178/267] Linking static target lib/librte_reorder.a 00:02:35.365 [179/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.365 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.365 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.365 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.365 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.365 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:35.365 [185/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:35.365 [186/267] Linking static target lib/librte_mbuf.a 00:02:35.365 [187/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.365 [188/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.365 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.365 [190/267] Linking static target lib/librte_hash.a 00:02:35.365 [191/267] Linking static target drivers/librte_bus_vdev.a 00:02:35.365 [192/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.365 [193/267] 
Linking target lib/librte_kvargs.so.24.1 00:02:35.625 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:35.625 [195/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.625 [196/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.625 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.625 [198/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.625 [199/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.625 [200/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.625 [201/267] Linking static target drivers/librte_mempool_ring.a 00:02:35.625 [202/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.625 [203/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.625 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:35.625 [205/267] Linking static target drivers/librte_bus_pci.a 00:02:35.625 [206/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:35.625 [207/267] Linking static target lib/librte_cryptodev.a 00:02:35.625 [208/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:35.625 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.625 [210/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.625 [211/267] Linking target lib/librte_telemetry.so.24.1 00:02:35.885 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:35.885 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.885 [214/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.144 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.144 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.144 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:36.144 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.144 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:36.144 [220/267] Linking static target lib/librte_ethdev.a 00:02:36.144 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.405 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.405 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.405 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.405 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.666 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.927 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:36.927 [228/267] Linking static target lib/librte_vhost.a 00:02:37.872 [229/267] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.260 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.851 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.793 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.793 [233/267] Linking target lib/librte_eal.so.24.1 00:02:47.052 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:47.052 [235/267] Linking target lib/librte_meter.so.24.1 00:02:47.052 [236/267] Linking target lib/librte_ring.so.24.1 00:02:47.052 [237/267] Linking target lib/librte_timer.so.24.1 00:02:47.052 [238/267] Linking target lib/librte_pci.so.24.1 00:02:47.052 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.052 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:47.052 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.052 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.052 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.052 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.052 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.052 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:47.312 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:47.312 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:47.312 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:47.312 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:47.312 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:47.312 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:47.588 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:47.588 [254/267] Linking target lib/librte_reorder.so.24.1 00:02:47.588 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:47.588 [256/267] Linking target lib/librte_net.so.24.1 00:02:47.588 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:47.588 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:47.588 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:47.848 [260/267] Linking target lib/librte_hash.so.24.1 00:02:47.848 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:47.848 [262/267] Linking target lib/librte_security.so.24.1 00:02:47.848 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:47.848 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:47.848 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:47.848 [266/267] Linking target lib/librte_power.so.24.1 00:02:47.848 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:47.848 INFO: autodetecting backend as ninja 00:02:47.848 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:51.146 CC lib/ut/ut.o 00:02:51.146 CC lib/ut_mock/mock.o 00:02:51.146 CC lib/log/log.o 00:02:51.146 CC lib/log/log_flags.o 00:02:51.146 CC lib/log/log_deprecated.o 00:02:51.407 LIB 
libspdk_ut.a 00:02:51.407 LIB libspdk_ut_mock.a 00:02:51.407 LIB libspdk_log.a 00:02:51.407 SO libspdk_ut.so.2.0 00:02:51.407 SO libspdk_ut_mock.so.6.0 00:02:51.407 SO libspdk_log.so.7.1 00:02:51.407 SYMLINK libspdk_ut.so 00:02:51.407 SYMLINK libspdk_ut_mock.so 00:02:51.407 SYMLINK libspdk_log.so 00:02:51.978 CC lib/util/base64.o 00:02:51.978 CC lib/util/bit_array.o 00:02:51.978 CC lib/ioat/ioat.o 00:02:51.978 CC lib/util/cpuset.o 00:02:51.978 CC lib/util/crc16.o 00:02:51.978 CC lib/dma/dma.o 00:02:51.978 CC lib/util/crc32.o 00:02:51.978 CXX lib/trace_parser/trace.o 00:02:51.978 CC lib/util/crc32c.o 00:02:51.978 CC lib/util/crc32_ieee.o 00:02:51.978 CC lib/util/crc64.o 00:02:51.978 CC lib/util/dif.o 00:02:51.978 CC lib/util/fd.o 00:02:51.978 CC lib/util/fd_group.o 00:02:51.978 CC lib/util/file.o 00:02:51.978 CC lib/util/hexlify.o 00:02:51.978 CC lib/util/iov.o 00:02:51.978 CC lib/util/math.o 00:02:51.978 CC lib/util/net.o 00:02:51.978 CC lib/util/pipe.o 00:02:51.978 CC lib/util/strerror_tls.o 00:02:51.978 CC lib/util/string.o 00:02:51.978 CC lib/util/uuid.o 00:02:51.978 CC lib/util/xor.o 00:02:51.978 CC lib/util/zipf.o 00:02:51.978 CC lib/util/md5.o 00:02:51.978 CC lib/vfio_user/host/vfio_user_pci.o 00:02:51.978 CC lib/vfio_user/host/vfio_user.o 00:02:52.239 LIB libspdk_dma.a 00:02:52.239 SO libspdk_dma.so.5.0 00:02:52.239 LIB libspdk_ioat.a 00:02:52.239 SO libspdk_ioat.so.7.0 00:02:52.239 SYMLINK libspdk_dma.so 00:02:52.239 SYMLINK libspdk_ioat.so 00:02:52.239 LIB libspdk_vfio_user.a 00:02:52.239 SO libspdk_vfio_user.so.5.0 00:02:52.500 LIB libspdk_util.a 00:02:52.500 SYMLINK libspdk_vfio_user.so 00:02:52.500 SO libspdk_util.so.10.1 00:02:52.761 SYMLINK libspdk_util.so 00:02:52.761 LIB libspdk_trace_parser.a 00:02:52.761 SO libspdk_trace_parser.so.6.0 00:02:52.761 SYMLINK libspdk_trace_parser.so 00:02:53.022 CC lib/env_dpdk/env.o 00:02:53.022 CC lib/env_dpdk/memory.o 00:02:53.022 CC lib/env_dpdk/pci.o 00:02:53.022 CC lib/env_dpdk/init.o 00:02:53.022 CC lib/conf/conf.o 00:02:53.022 CC lib/env_dpdk/threads.o 00:02:53.022 CC lib/json/json_parse.o 00:02:53.022 CC lib/vmd/vmd.o 00:02:53.022 CC lib/env_dpdk/pci_ioat.o 00:02:53.022 CC lib/idxd/idxd.o 00:02:53.022 CC lib/json/json_util.o 00:02:53.022 CC lib/env_dpdk/pci_virtio.o 00:02:53.022 CC lib/idxd/idxd_user.o 00:02:53.022 CC lib/rdma_utils/rdma_utils.o 00:02:53.022 CC lib/vmd/led.o 00:02:53.022 CC lib/env_dpdk/pci_vmd.o 00:02:53.022 CC lib/json/json_write.o 00:02:53.022 CC lib/idxd/idxd_kernel.o 00:02:53.022 CC lib/env_dpdk/pci_idxd.o 00:02:53.022 CC lib/env_dpdk/pci_event.o 00:02:53.022 CC lib/env_dpdk/sigbus_handler.o 00:02:53.022 CC lib/env_dpdk/pci_dpdk.o 00:02:53.022 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:53.022 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:53.285 LIB libspdk_conf.a 00:02:53.285 SO libspdk_conf.so.6.0 00:02:53.285 LIB libspdk_rdma_utils.a 00:02:53.285 LIB libspdk_json.a 00:02:53.285 SO libspdk_rdma_utils.so.1.0 00:02:53.285 SO libspdk_json.so.6.0 00:02:53.285 SYMLINK libspdk_conf.so 00:02:53.546 SYMLINK libspdk_rdma_utils.so 00:02:53.546 SYMLINK libspdk_json.so 00:02:53.546 LIB libspdk_idxd.a 00:02:53.546 SO libspdk_idxd.so.12.1 00:02:53.546 LIB libspdk_vmd.a 00:02:53.807 SO libspdk_vmd.so.6.0 00:02:53.807 SYMLINK libspdk_idxd.so 00:02:53.807 SYMLINK libspdk_vmd.so 00:02:53.807 CC lib/rdma_provider/common.o 00:02:53.807 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:53.807 CC lib/jsonrpc/jsonrpc_server.o 00:02:53.807 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:53.807 CC lib/jsonrpc/jsonrpc_client.o 00:02:53.807 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:54.068 LIB libspdk_rdma_provider.a 00:02:54.068 SO libspdk_rdma_provider.so.7.0 00:02:54.068 LIB libspdk_jsonrpc.a 00:02:54.068 SYMLINK libspdk_rdma_provider.so 00:02:54.068 SO libspdk_jsonrpc.so.6.0 00:02:54.328 SYMLINK libspdk_jsonrpc.so 00:02:54.328 LIB libspdk_env_dpdk.a 00:02:54.328 SO libspdk_env_dpdk.so.15.1 00:02:54.588 SYMLINK libspdk_env_dpdk.so 00:02:54.588 CC lib/rpc/rpc.o 00:02:54.850 LIB libspdk_rpc.a 00:02:54.850 SO libspdk_rpc.so.6.0 00:02:54.850 SYMLINK libspdk_rpc.so 00:02:55.110 CC lib/keyring/keyring.o 00:02:55.110 CC lib/keyring/keyring_rpc.o 00:02:55.110 CC lib/trace/trace.o 00:02:55.110 CC lib/trace/trace_flags.o 00:02:55.110 CC lib/trace/trace_rpc.o 00:02:55.372 CC lib/notify/notify.o 00:02:55.372 CC lib/notify/notify_rpc.o 00:02:55.372 LIB libspdk_notify.a 00:02:55.372 SO libspdk_notify.so.6.0 00:02:55.372 LIB libspdk_trace.a 00:02:55.633 LIB libspdk_keyring.a 00:02:55.633 SO libspdk_trace.so.11.0 00:02:55.633 SYMLINK libspdk_notify.so 00:02:55.633 SO libspdk_keyring.so.2.0 00:02:55.633 SYMLINK libspdk_trace.so 00:02:55.633 SYMLINK libspdk_keyring.so 00:02:55.893 CC lib/thread/thread.o 00:02:55.893 CC lib/thread/iobuf.o 00:02:55.893 CC lib/sock/sock.o 00:02:55.893 CC lib/sock/sock_rpc.o 00:02:56.464 LIB libspdk_sock.a 00:02:56.464 SO libspdk_sock.so.10.0 00:02:56.464 SYMLINK libspdk_sock.so 00:02:56.726 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.726 CC lib/nvme/nvme_ctrlr.o 00:02:56.726 CC lib/nvme/nvme_fabric.o 00:02:56.726 CC lib/nvme/nvme_ns_cmd.o 00:02:56.726 CC lib/nvme/nvme_ns.o 00:02:56.726 CC lib/nvme/nvme_pcie_common.o 00:02:56.726 CC lib/nvme/nvme_pcie.o 00:02:56.726 CC lib/nvme/nvme_qpair.o 00:02:56.726 CC lib/nvme/nvme.o 00:02:56.726 CC lib/nvme/nvme_quirks.o 00:02:56.726 CC lib/nvme/nvme_transport.o 00:02:56.726 CC lib/nvme/nvme_discovery.o 00:02:56.726 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:56.726 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:56.726 CC lib/nvme/nvme_tcp.o 00:02:56.726 CC lib/nvme/nvme_opal.o 00:02:56.726 CC lib/nvme/nvme_io_msg.o 00:02:56.726 CC lib/nvme/nvme_poll_group.o 00:02:56.726 CC lib/nvme/nvme_zns.o 00:02:56.726 CC lib/nvme/nvme_stubs.o 00:02:56.726 CC lib/nvme/nvme_auth.o 00:02:56.726 CC lib/nvme/nvme_cuse.o 00:02:56.726 CC lib/nvme/nvme_vfio_user.o 00:02:56.726 CC lib/nvme/nvme_rdma.o 00:02:57.299 LIB libspdk_thread.a 00:02:57.299 SO libspdk_thread.so.11.0 00:02:57.560 SYMLINK libspdk_thread.so 00:02:57.822 CC lib/vfu_tgt/tgt_endpoint.o 00:02:57.822 CC lib/vfu_tgt/tgt_rpc.o 00:02:57.822 CC lib/blob/blobstore.o 00:02:57.822 CC lib/blob/request.o 00:02:57.822 CC lib/blob/zeroes.o 00:02:57.822 CC lib/blob/blob_bs_dev.o 00:02:57.822 CC lib/fsdev/fsdev.o 00:02:57.822 CC lib/fsdev/fsdev_io.o 00:02:57.822 CC lib/fsdev/fsdev_rpc.o 00:02:57.822 CC lib/virtio/virtio.o 00:02:57.822 CC lib/virtio/virtio_vhost_user.o 00:02:57.822 CC lib/init/json_config.o 00:02:57.822 CC lib/virtio/virtio_vfio_user.o 00:02:57.822 CC lib/virtio/virtio_pci.o 00:02:57.822 CC lib/init/subsystem.o 00:02:57.822 CC lib/init/subsystem_rpc.o 00:02:57.822 CC lib/init/rpc.o 00:02:57.822 CC lib/accel/accel.o 00:02:57.822 CC lib/accel/accel_rpc.o 00:02:57.822 CC lib/accel/accel_sw.o 00:02:58.083 LIB libspdk_init.a 00:02:58.083 LIB libspdk_vfu_tgt.a 00:02:58.083 SO libspdk_init.so.6.0 00:02:58.083 LIB libspdk_virtio.a 00:02:58.083 SO libspdk_vfu_tgt.so.3.0 00:02:58.083 SO libspdk_virtio.so.7.0 00:02:58.345 SYMLINK libspdk_init.so 00:02:58.345 SYMLINK libspdk_vfu_tgt.so 00:02:58.345 SYMLINK libspdk_virtio.so 00:02:58.345 LIB libspdk_fsdev.a 
00:02:58.345 SO libspdk_fsdev.so.2.0 00:02:58.608 SYMLINK libspdk_fsdev.so 00:02:58.608 CC lib/event/app.o 00:02:58.608 CC lib/event/reactor.o 00:02:58.608 CC lib/event/log_rpc.o 00:02:58.608 CC lib/event/app_rpc.o 00:02:58.608 CC lib/event/scheduler_static.o 00:02:58.869 LIB libspdk_accel.a 00:02:58.869 LIB libspdk_nvme.a 00:02:58.869 SO libspdk_accel.so.16.0 00:02:58.869 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:58.869 SYMLINK libspdk_accel.so 00:02:58.869 SO libspdk_nvme.so.15.0 00:02:58.869 LIB libspdk_event.a 00:02:59.130 SO libspdk_event.so.14.0 00:02:59.130 SYMLINK libspdk_event.so 00:02:59.130 SYMLINK libspdk_nvme.so 00:02:59.391 CC lib/bdev/bdev.o 00:02:59.391 CC lib/bdev/bdev_rpc.o 00:02:59.391 CC lib/bdev/bdev_zone.o 00:02:59.391 CC lib/bdev/part.o 00:02:59.391 CC lib/bdev/scsi_nvme.o 00:02:59.391 LIB libspdk_fuse_dispatcher.a 00:02:59.652 SO libspdk_fuse_dispatcher.so.1.0 00:02:59.652 SYMLINK libspdk_fuse_dispatcher.so 00:03:00.620 LIB libspdk_blob.a 00:03:00.620 SO libspdk_blob.so.12.0 00:03:00.620 SYMLINK libspdk_blob.so 00:03:00.964 CC lib/lvol/lvol.o 00:03:00.964 CC lib/blobfs/blobfs.o 00:03:00.964 CC lib/blobfs/tree.o 00:03:01.575 LIB libspdk_bdev.a 00:03:01.836 SO libspdk_bdev.so.17.0 00:03:01.836 LIB libspdk_blobfs.a 00:03:01.836 SO libspdk_blobfs.so.11.0 00:03:01.836 LIB libspdk_lvol.a 00:03:01.836 SYMLINK libspdk_bdev.so 00:03:01.836 SO libspdk_lvol.so.11.0 00:03:01.836 SYMLINK libspdk_blobfs.so 00:03:01.836 SYMLINK libspdk_lvol.so 00:03:02.095 CC lib/scsi/dev.o 00:03:02.095 CC lib/scsi/lun.o 00:03:02.095 CC lib/nvmf/ctrlr.o 00:03:02.095 CC lib/scsi/port.o 00:03:02.095 CC lib/nvmf/ctrlr_discovery.o 00:03:02.095 CC lib/nvmf/ctrlr_bdev.o 00:03:02.095 CC lib/scsi/scsi.o 00:03:02.095 CC lib/nvmf/subsystem.o 00:03:02.096 CC lib/scsi/scsi_bdev.o 00:03:02.096 CC lib/nbd/nbd.o 00:03:02.096 CC lib/nvmf/nvmf.o 00:03:02.096 CC lib/nvmf/nvmf_rpc.o 00:03:02.096 CC lib/scsi/scsi_pr.o 00:03:02.096 CC lib/nbd/nbd_rpc.o 00:03:02.096 CC lib/nvmf/transport.o 00:03:02.096 CC lib/scsi/scsi_rpc.o 00:03:02.096 CC lib/ublk/ublk.o 00:03:02.096 CC lib/nvmf/tcp.o 00:03:02.096 CC lib/scsi/task.o 00:03:02.096 CC lib/nvmf/stubs.o 00:03:02.096 CC lib/ublk/ublk_rpc.o 00:03:02.096 CC lib/nvmf/mdns_server.o 00:03:02.096 CC lib/nvmf/vfio_user.o 00:03:02.096 CC lib/ftl/ftl_core.o 00:03:02.096 CC lib/nvmf/rdma.o 00:03:02.096 CC lib/ftl/ftl_init.o 00:03:02.096 CC lib/nvmf/auth.o 00:03:02.096 CC lib/ftl/ftl_layout.o 00:03:02.096 CC lib/ftl/ftl_debug.o 00:03:02.096 CC lib/ftl/ftl_io.o 00:03:02.096 CC lib/ftl/ftl_sb.o 00:03:02.096 CC lib/ftl/ftl_l2p.o 00:03:02.096 CC lib/ftl/ftl_l2p_flat.o 00:03:02.096 CC lib/ftl/ftl_nv_cache.o 00:03:02.096 CC lib/ftl/ftl_band.o 00:03:02.096 CC lib/ftl/ftl_band_ops.o 00:03:02.096 CC lib/ftl/ftl_writer.o 00:03:02.096 CC lib/ftl/ftl_rq.o 00:03:02.096 CC lib/ftl/ftl_reloc.o 00:03:02.096 CC lib/ftl/ftl_l2p_cache.o 00:03:02.096 CC lib/ftl/ftl_p2l.o 00:03:02.096 CC lib/ftl/ftl_p2l_log.o 00:03:02.096 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.096 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.096 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.096 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.096 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.096 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.096 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.096 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.355 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:02.355 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.355 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.355 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:02.355 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:03:02.355 CC lib/ftl/utils/ftl_conf.o 00:03:02.355 CC lib/ftl/utils/ftl_md.o 00:03:02.355 CC lib/ftl/utils/ftl_mempool.o 00:03:02.355 CC lib/ftl/utils/ftl_bitmap.o 00:03:02.355 CC lib/ftl/utils/ftl_property.o 00:03:02.355 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:02.355 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:02.355 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:02.355 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:02.355 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:02.355 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:02.355 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:02.355 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:02.355 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:02.355 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:02.355 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:02.355 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:02.355 CC lib/ftl/base/ftl_base_dev.o 00:03:02.355 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:02.355 CC lib/ftl/base/ftl_base_bdev.o 00:03:02.355 CC lib/ftl/ftl_trace.o 00:03:02.925 LIB libspdk_nbd.a 00:03:02.925 SO libspdk_nbd.so.7.0 00:03:02.925 LIB libspdk_scsi.a 00:03:03.186 SYMLINK libspdk_nbd.so 00:03:03.186 SO libspdk_scsi.so.9.0 00:03:03.186 LIB libspdk_ublk.a 00:03:03.186 SO libspdk_ublk.so.3.0 00:03:03.186 SYMLINK libspdk_scsi.so 00:03:03.186 SYMLINK libspdk_ublk.so 00:03:03.446 LIB libspdk_ftl.a 00:03:03.446 CC lib/iscsi/conn.o 00:03:03.446 CC lib/iscsi/init_grp.o 00:03:03.446 CC lib/iscsi/iscsi.o 00:03:03.446 CC lib/iscsi/param.o 00:03:03.446 CC lib/vhost/vhost.o 00:03:03.446 CC lib/iscsi/portal_grp.o 00:03:03.446 CC lib/vhost/vhost_rpc.o 00:03:03.446 CC lib/iscsi/tgt_node.o 00:03:03.446 CC lib/vhost/vhost_scsi.o 00:03:03.446 CC lib/iscsi/iscsi_subsystem.o 00:03:03.446 CC lib/vhost/vhost_blk.o 00:03:03.446 CC lib/iscsi/iscsi_rpc.o 00:03:03.446 CC lib/vhost/rte_vhost_user.o 00:03:03.446 CC lib/iscsi/task.o 00:03:03.706 SO libspdk_ftl.so.9.0 00:03:03.967 SYMLINK libspdk_ftl.so 00:03:04.537 LIB libspdk_nvmf.a 00:03:04.537 SO libspdk_nvmf.so.20.0 00:03:04.537 LIB libspdk_vhost.a 00:03:04.537 SO libspdk_vhost.so.8.0 00:03:04.537 SYMLINK libspdk_nvmf.so 00:03:04.797 SYMLINK libspdk_vhost.so 00:03:04.797 LIB libspdk_iscsi.a 00:03:04.797 SO libspdk_iscsi.so.8.0 00:03:05.057 SYMLINK libspdk_iscsi.so 00:03:05.630 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.630 CC module/vfu_device/vfu_virtio.o 00:03:05.630 CC module/vfu_device/vfu_virtio_blk.o 00:03:05.630 CC module/vfu_device/vfu_virtio_scsi.o 00:03:05.630 CC module/vfu_device/vfu_virtio_rpc.o 00:03:05.630 CC module/vfu_device/vfu_virtio_fs.o 00:03:05.630 LIB libspdk_env_dpdk_rpc.a 00:03:05.891 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.891 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.891 CC module/accel/ioat/accel_ioat.o 00:03:05.891 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.891 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.891 CC module/sock/posix/posix.o 00:03:05.891 CC module/blob/bdev/blob_bdev.o 00:03:05.891 CC module/accel/error/accel_error.o 00:03:05.891 CC module/accel/error/accel_error_rpc.o 00:03:05.891 CC module/accel/dsa/accel_dsa.o 00:03:05.891 SO libspdk_env_dpdk_rpc.so.6.0 00:03:05.891 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.891 CC module/keyring/linux/keyring.o 00:03:05.891 CC module/keyring/file/keyring.o 00:03:05.891 CC module/keyring/linux/keyring_rpc.o 00:03:05.891 CC module/keyring/file/keyring_rpc.o 00:03:05.891 CC module/accel/iaa/accel_iaa.o 00:03:05.891 CC module/fsdev/aio/fsdev_aio.o 00:03:05.891 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:05.891 CC 
module/fsdev/aio/linux_aio_mgr.o 00:03:05.891 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.891 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.891 LIB libspdk_keyring_linux.a 00:03:05.891 LIB libspdk_scheduler_gscheduler.a 00:03:05.891 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.891 LIB libspdk_keyring_file.a 00:03:05.891 SO libspdk_keyring_linux.so.1.0 00:03:05.891 LIB libspdk_accel_error.a 00:03:05.891 SO libspdk_scheduler_gscheduler.so.4.0 00:03:06.152 LIB libspdk_accel_ioat.a 00:03:06.152 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:06.152 LIB libspdk_accel_iaa.a 00:03:06.152 SO libspdk_keyring_file.so.2.0 00:03:06.152 LIB libspdk_scheduler_dynamic.a 00:03:06.152 SO libspdk_accel_error.so.2.0 00:03:06.152 SO libspdk_accel_iaa.so.3.0 00:03:06.152 SO libspdk_accel_ioat.so.6.0 00:03:06.152 SYMLINK libspdk_keyring_linux.so 00:03:06.152 SYMLINK libspdk_scheduler_gscheduler.so 00:03:06.152 SO libspdk_scheduler_dynamic.so.4.0 00:03:06.152 LIB libspdk_accel_dsa.a 00:03:06.152 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:06.152 LIB libspdk_blob_bdev.a 00:03:06.152 SO libspdk_accel_dsa.so.5.0 00:03:06.152 SYMLINK libspdk_keyring_file.so 00:03:06.152 SYMLINK libspdk_accel_error.so 00:03:06.152 SYMLINK libspdk_accel_iaa.so 00:03:06.152 SYMLINK libspdk_accel_ioat.so 00:03:06.152 SO libspdk_blob_bdev.so.12.0 00:03:06.152 SYMLINK libspdk_scheduler_dynamic.so 00:03:06.152 LIB libspdk_vfu_device.a 00:03:06.152 SYMLINK libspdk_accel_dsa.so 00:03:06.152 SYMLINK libspdk_blob_bdev.so 00:03:06.152 SO libspdk_vfu_device.so.3.0 00:03:06.413 SYMLINK libspdk_vfu_device.so 00:03:06.413 LIB libspdk_fsdev_aio.a 00:03:06.413 SO libspdk_fsdev_aio.so.1.0 00:03:06.413 LIB libspdk_sock_posix.a 00:03:06.674 SO libspdk_sock_posix.so.6.0 00:03:06.674 SYMLINK libspdk_fsdev_aio.so 00:03:06.674 SYMLINK libspdk_sock_posix.so 00:03:06.674 CC module/bdev/lvol/vbdev_lvol.o 00:03:06.674 CC module/blobfs/bdev/blobfs_bdev.o 00:03:06.674 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:06.674 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:06.674 CC module/bdev/malloc/bdev_malloc.o 00:03:06.674 CC module/bdev/null/bdev_null.o 00:03:06.674 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.674 CC module/bdev/null/bdev_null_rpc.o 00:03:06.674 CC module/bdev/gpt/gpt.o 00:03:06.674 CC module/bdev/gpt/vbdev_gpt.o 00:03:06.674 CC module/bdev/delay/vbdev_delay.o 00:03:06.674 CC module/bdev/nvme/bdev_nvme.o 00:03:06.674 CC module/bdev/error/vbdev_error.o 00:03:06.674 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:06.935 CC module/bdev/raid/bdev_raid.o 00:03:06.935 CC module/bdev/error/vbdev_error_rpc.o 00:03:06.935 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.935 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.935 CC module/bdev/nvme/nvme_rpc.o 00:03:06.935 CC module/bdev/nvme/bdev_mdns_client.o 00:03:06.935 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.935 CC module/bdev/ftl/bdev_ftl.o 00:03:06.935 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:06.935 CC module/bdev/nvme/vbdev_opal.o 00:03:06.935 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:06.935 CC module/bdev/raid/raid1.o 00:03:06.935 CC module/bdev/iscsi/bdev_iscsi.o 00:03:06.935 CC module/bdev/raid/raid0.o 00:03:06.935 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:06.935 CC module/bdev/raid/concat.o 00:03:06.935 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:06.935 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.935 CC module/bdev/split/vbdev_split.o 00:03:06.935 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:06.935 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.935 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.935 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.935 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:06.935 CC module/bdev/aio/bdev_aio.o 00:03:06.935 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:06.935 CC module/bdev/aio/bdev_aio_rpc.o 00:03:06.935 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:06.935 LIB libspdk_blobfs_bdev.a 00:03:07.197 LIB libspdk_bdev_null.a 00:03:07.197 SO libspdk_blobfs_bdev.so.6.0 00:03:07.197 LIB libspdk_bdev_error.a 00:03:07.197 SO libspdk_bdev_null.so.6.0 00:03:07.197 LIB libspdk_bdev_gpt.a 00:03:07.197 SO libspdk_bdev_error.so.6.0 00:03:07.197 LIB libspdk_bdev_split.a 00:03:07.197 SYMLINK libspdk_blobfs_bdev.so 00:03:07.197 SO libspdk_bdev_gpt.so.6.0 00:03:07.197 LIB libspdk_bdev_ftl.a 00:03:07.197 SO libspdk_bdev_split.so.6.0 00:03:07.197 SYMLINK libspdk_bdev_null.so 00:03:07.197 SYMLINK libspdk_bdev_error.so 00:03:07.197 LIB libspdk_bdev_malloc.a 00:03:07.197 LIB libspdk_bdev_passthru.a 00:03:07.197 LIB libspdk_bdev_aio.a 00:03:07.197 SO libspdk_bdev_ftl.so.6.0 00:03:07.197 SO libspdk_bdev_malloc.so.6.0 00:03:07.197 LIB libspdk_bdev_zone_block.a 00:03:07.197 LIB libspdk_bdev_delay.a 00:03:07.197 SYMLINK libspdk_bdev_gpt.so 00:03:07.197 LIB libspdk_bdev_iscsi.a 00:03:07.197 SO libspdk_bdev_aio.so.6.0 00:03:07.197 SO libspdk_bdev_passthru.so.6.0 00:03:07.197 SYMLINK libspdk_bdev_split.so 00:03:07.197 SO libspdk_bdev_zone_block.so.6.0 00:03:07.197 SO libspdk_bdev_delay.so.6.0 00:03:07.197 SO libspdk_bdev_iscsi.so.6.0 00:03:07.197 SYMLINK libspdk_bdev_malloc.so 00:03:07.197 SYMLINK libspdk_bdev_ftl.so 00:03:07.197 SYMLINK libspdk_bdev_passthru.so 00:03:07.197 SYMLINK libspdk_bdev_aio.so 00:03:07.458 LIB libspdk_bdev_lvol.a 00:03:07.458 SYMLINK libspdk_bdev_zone_block.so 00:03:07.458 SYMLINK libspdk_bdev_delay.so 00:03:07.458 SYMLINK libspdk_bdev_iscsi.so 00:03:07.458 LIB libspdk_bdev_virtio.a 00:03:07.458 SO libspdk_bdev_lvol.so.6.0 00:03:07.458 SO libspdk_bdev_virtio.so.6.0 00:03:07.458 SYMLINK libspdk_bdev_lvol.so 00:03:07.458 SYMLINK libspdk_bdev_virtio.so 00:03:07.719 LIB libspdk_bdev_raid.a 00:03:07.719 SO libspdk_bdev_raid.so.6.0 00:03:07.979 SYMLINK libspdk_bdev_raid.so 00:03:09.363 LIB libspdk_bdev_nvme.a 00:03:09.363 SO libspdk_bdev_nvme.so.7.1 00:03:09.363 SYMLINK libspdk_bdev_nvme.so 00:03:10.308 CC module/event/subsystems/iobuf/iobuf.o 00:03:10.308 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:10.308 CC module/event/subsystems/vmd/vmd.o 00:03:10.308 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:10.308 CC module/event/subsystems/sock/sock.o 00:03:10.308 CC module/event/subsystems/keyring/keyring.o 00:03:10.308 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:10.308 CC module/event/subsystems/scheduler/scheduler.o 00:03:10.308 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:10.308 CC module/event/subsystems/fsdev/fsdev.o 00:03:10.308 LIB libspdk_event_vfu_tgt.a 00:03:10.308 LIB libspdk_event_sock.a 00:03:10.308 LIB libspdk_event_keyring.a 00:03:10.308 LIB libspdk_event_vmd.a 00:03:10.308 LIB libspdk_event_vhost_blk.a 00:03:10.308 LIB libspdk_event_fsdev.a 00:03:10.308 LIB libspdk_event_scheduler.a 00:03:10.308 LIB libspdk_event_iobuf.a 00:03:10.308 SO libspdk_event_sock.so.5.0 00:03:10.308 SO libspdk_event_keyring.so.1.0 00:03:10.308 SO libspdk_event_vfu_tgt.so.3.0 00:03:10.308 SO libspdk_event_vmd.so.6.0 00:03:10.308 SO libspdk_event_vhost_blk.so.3.0 00:03:10.308 SO libspdk_event_iobuf.so.3.0 00:03:10.308 SO libspdk_event_fsdev.so.1.0 00:03:10.308 SO libspdk_event_scheduler.so.4.0 
00:03:10.308 SYMLINK libspdk_event_sock.so 00:03:10.308 SYMLINK libspdk_event_keyring.so 00:03:10.308 SYMLINK libspdk_event_vfu_tgt.so 00:03:10.308 SYMLINK libspdk_event_vmd.so 00:03:10.308 SYMLINK libspdk_event_vhost_blk.so 00:03:10.308 SYMLINK libspdk_event_fsdev.so 00:03:10.569 SYMLINK libspdk_event_iobuf.so 00:03:10.569 SYMLINK libspdk_event_scheduler.so 00:03:10.830 CC module/event/subsystems/accel/accel.o 00:03:10.830 LIB libspdk_event_accel.a 00:03:11.090 SO libspdk_event_accel.so.6.0 00:03:11.090 SYMLINK libspdk_event_accel.so 00:03:11.351 CC module/event/subsystems/bdev/bdev.o 00:03:11.612 LIB libspdk_event_bdev.a 00:03:11.612 SO libspdk_event_bdev.so.6.0 00:03:11.612 SYMLINK libspdk_event_bdev.so 00:03:12.186 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.186 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.186 CC module/event/subsystems/nbd/nbd.o 00:03:12.186 CC module/event/subsystems/scsi/scsi.o 00:03:12.186 CC module/event/subsystems/ublk/ublk.o 00:03:12.186 LIB libspdk_event_nbd.a 00:03:12.186 LIB libspdk_event_ublk.a 00:03:12.186 LIB libspdk_event_scsi.a 00:03:12.186 SO libspdk_event_nbd.so.6.0 00:03:12.186 SO libspdk_event_ublk.so.3.0 00:03:12.186 SO libspdk_event_scsi.so.6.0 00:03:12.186 LIB libspdk_event_nvmf.a 00:03:12.448 SYMLINK libspdk_event_nbd.so 00:03:12.448 SYMLINK libspdk_event_ublk.so 00:03:12.448 SO libspdk_event_nvmf.so.6.0 00:03:12.448 SYMLINK libspdk_event_scsi.so 00:03:12.448 SYMLINK libspdk_event_nvmf.so 00:03:12.710 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:12.710 CC module/event/subsystems/iscsi/iscsi.o 00:03:12.972 LIB libspdk_event_vhost_scsi.a 00:03:12.972 LIB libspdk_event_iscsi.a 00:03:12.972 SO libspdk_event_vhost_scsi.so.3.0 00:03:12.972 SO libspdk_event_iscsi.so.6.0 00:03:12.972 SYMLINK libspdk_event_vhost_scsi.so 00:03:12.972 SYMLINK libspdk_event_iscsi.so 00:03:13.233 SO libspdk.so.6.0 00:03:13.233 SYMLINK libspdk.so 00:03:13.495 CXX app/trace/trace.o 00:03:13.495 CC app/trace_record/trace_record.o 00:03:13.495 TEST_HEADER include/spdk/accel.h 00:03:13.495 TEST_HEADER include/spdk/accel_module.h 00:03:13.495 TEST_HEADER include/spdk/assert.h 00:03:13.495 TEST_HEADER include/spdk/barrier.h 00:03:13.495 TEST_HEADER include/spdk/base64.h 00:03:13.495 TEST_HEADER include/spdk/bdev.h 00:03:13.495 CC app/spdk_top/spdk_top.o 00:03:13.495 TEST_HEADER include/spdk/bdev_module.h 00:03:13.495 TEST_HEADER include/spdk/bdev_zone.h 00:03:13.495 TEST_HEADER include/spdk/bit_array.h 00:03:13.495 CC test/rpc_client/rpc_client_test.o 00:03:13.495 TEST_HEADER include/spdk/bit_pool.h 00:03:13.495 TEST_HEADER include/spdk/blob_bdev.h 00:03:13.495 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:13.495 CC app/spdk_lspci/spdk_lspci.o 00:03:13.756 TEST_HEADER include/spdk/blobfs.h 00:03:13.756 CC app/spdk_nvme_discover/discovery_aer.o 00:03:13.756 TEST_HEADER include/spdk/blob.h 00:03:13.756 CC app/spdk_nvme_perf/perf.o 00:03:13.756 TEST_HEADER include/spdk/conf.h 00:03:13.756 TEST_HEADER include/spdk/cpuset.h 00:03:13.756 TEST_HEADER include/spdk/config.h 00:03:13.756 CC app/spdk_nvme_identify/identify.o 00:03:13.756 TEST_HEADER include/spdk/crc16.h 00:03:13.756 TEST_HEADER include/spdk/dif.h 00:03:13.756 TEST_HEADER include/spdk/crc32.h 00:03:13.756 TEST_HEADER include/spdk/crc64.h 00:03:13.756 TEST_HEADER include/spdk/dma.h 00:03:13.756 TEST_HEADER include/spdk/env.h 00:03:13.756 TEST_HEADER include/spdk/endian.h 00:03:13.756 TEST_HEADER include/spdk/env_dpdk.h 00:03:13.756 TEST_HEADER include/spdk/event.h 00:03:13.756 TEST_HEADER 
include/spdk/fd_group.h 00:03:13.756 TEST_HEADER include/spdk/fd.h 00:03:13.756 TEST_HEADER include/spdk/file.h 00:03:13.756 TEST_HEADER include/spdk/fsdev_module.h 00:03:13.756 TEST_HEADER include/spdk/fsdev.h 00:03:13.756 TEST_HEADER include/spdk/ftl.h 00:03:13.756 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:13.756 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:13.756 TEST_HEADER include/spdk/gpt_spec.h 00:03:13.756 TEST_HEADER include/spdk/hexlify.h 00:03:13.756 TEST_HEADER include/spdk/histogram_data.h 00:03:13.756 TEST_HEADER include/spdk/idxd.h 00:03:13.756 TEST_HEADER include/spdk/init.h 00:03:13.756 TEST_HEADER include/spdk/idxd_spec.h 00:03:13.756 TEST_HEADER include/spdk/ioat.h 00:03:13.756 TEST_HEADER include/spdk/ioat_spec.h 00:03:13.756 TEST_HEADER include/spdk/iscsi_spec.h 00:03:13.756 TEST_HEADER include/spdk/json.h 00:03:13.756 TEST_HEADER include/spdk/jsonrpc.h 00:03:13.756 TEST_HEADER include/spdk/keyring.h 00:03:13.756 TEST_HEADER include/spdk/keyring_module.h 00:03:13.756 CC app/iscsi_tgt/iscsi_tgt.o 00:03:13.756 TEST_HEADER include/spdk/likely.h 00:03:13.756 TEST_HEADER include/spdk/lvol.h 00:03:13.756 TEST_HEADER include/spdk/log.h 00:03:13.756 CC app/nvmf_tgt/nvmf_main.o 00:03:13.756 TEST_HEADER include/spdk/md5.h 00:03:13.756 TEST_HEADER include/spdk/memory.h 00:03:13.756 TEST_HEADER include/spdk/nbd.h 00:03:13.756 TEST_HEADER include/spdk/mmio.h 00:03:13.756 TEST_HEADER include/spdk/net.h 00:03:13.756 TEST_HEADER include/spdk/notify.h 00:03:13.756 CC app/spdk_dd/spdk_dd.o 00:03:13.756 TEST_HEADER include/spdk/nvme.h 00:03:13.756 TEST_HEADER include/spdk/nvme_intel.h 00:03:13.756 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:13.756 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:13.756 TEST_HEADER include/spdk/nvme_spec.h 00:03:13.756 TEST_HEADER include/spdk/nvme_zns.h 00:03:13.756 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:13.756 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:13.756 TEST_HEADER include/spdk/nvmf.h 00:03:13.756 TEST_HEADER include/spdk/nvmf_spec.h 00:03:13.756 TEST_HEADER include/spdk/opal_spec.h 00:03:13.756 TEST_HEADER include/spdk/nvmf_transport.h 00:03:13.756 TEST_HEADER include/spdk/opal.h 00:03:13.756 TEST_HEADER include/spdk/pci_ids.h 00:03:13.756 TEST_HEADER include/spdk/pipe.h 00:03:13.756 TEST_HEADER include/spdk/queue.h 00:03:13.756 TEST_HEADER include/spdk/reduce.h 00:03:13.757 TEST_HEADER include/spdk/rpc.h 00:03:13.757 CC app/spdk_tgt/spdk_tgt.o 00:03:13.757 TEST_HEADER include/spdk/scheduler.h 00:03:13.757 TEST_HEADER include/spdk/scsi.h 00:03:13.757 TEST_HEADER include/spdk/scsi_spec.h 00:03:13.757 TEST_HEADER include/spdk/stdinc.h 00:03:13.757 TEST_HEADER include/spdk/sock.h 00:03:13.757 TEST_HEADER include/spdk/string.h 00:03:13.757 TEST_HEADER include/spdk/thread.h 00:03:13.757 TEST_HEADER include/spdk/trace.h 00:03:13.757 TEST_HEADER include/spdk/trace_parser.h 00:03:13.757 TEST_HEADER include/spdk/tree.h 00:03:13.757 TEST_HEADER include/spdk/util.h 00:03:13.757 TEST_HEADER include/spdk/ublk.h 00:03:13.757 TEST_HEADER include/spdk/version.h 00:03:13.757 TEST_HEADER include/spdk/uuid.h 00:03:13.757 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:13.757 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:13.757 TEST_HEADER include/spdk/vmd.h 00:03:13.757 TEST_HEADER include/spdk/vhost.h 00:03:13.757 TEST_HEADER include/spdk/xor.h 00:03:13.757 TEST_HEADER include/spdk/zipf.h 00:03:13.757 CXX test/cpp_headers/accel.o 00:03:13.757 CXX test/cpp_headers/accel_module.o 00:03:13.757 CXX test/cpp_headers/assert.o 00:03:13.757 CXX 
test/cpp_headers/barrier.o 00:03:13.757 CXX test/cpp_headers/base64.o 00:03:13.757 CXX test/cpp_headers/bdev.o 00:03:13.757 CXX test/cpp_headers/bdev_module.o 00:03:13.757 CXX test/cpp_headers/bdev_zone.o 00:03:13.757 CXX test/cpp_headers/bit_array.o 00:03:13.757 CXX test/cpp_headers/bit_pool.o 00:03:13.757 CXX test/cpp_headers/blob_bdev.o 00:03:13.757 CXX test/cpp_headers/blobfs_bdev.o 00:03:13.757 CXX test/cpp_headers/blobfs.o 00:03:13.757 CXX test/cpp_headers/conf.o 00:03:13.757 CXX test/cpp_headers/blob.o 00:03:13.757 CXX test/cpp_headers/config.o 00:03:13.757 CXX test/cpp_headers/cpuset.o 00:03:13.757 CXX test/cpp_headers/crc16.o 00:03:13.757 CXX test/cpp_headers/crc32.o 00:03:13.757 CXX test/cpp_headers/crc64.o 00:03:13.757 CXX test/cpp_headers/dif.o 00:03:13.757 CXX test/cpp_headers/dma.o 00:03:13.757 CXX test/cpp_headers/endian.o 00:03:13.757 CXX test/cpp_headers/event.o 00:03:13.757 CXX test/cpp_headers/env_dpdk.o 00:03:13.757 CXX test/cpp_headers/env.o 00:03:13.757 CXX test/cpp_headers/fd_group.o 00:03:13.757 CXX test/cpp_headers/fd.o 00:03:13.757 CXX test/cpp_headers/fsdev.o 00:03:13.757 CXX test/cpp_headers/fsdev_module.o 00:03:13.757 CXX test/cpp_headers/file.o 00:03:13.757 CXX test/cpp_headers/gpt_spec.o 00:03:13.757 CXX test/cpp_headers/ftl.o 00:03:13.757 CXX test/cpp_headers/histogram_data.o 00:03:13.757 CXX test/cpp_headers/idxd_spec.o 00:03:13.757 CXX test/cpp_headers/idxd.o 00:03:13.757 CXX test/cpp_headers/hexlify.o 00:03:13.757 CXX test/cpp_headers/fuse_dispatcher.o 00:03:13.757 CXX test/cpp_headers/init.o 00:03:13.757 CXX test/cpp_headers/ioat_spec.o 00:03:13.757 CXX test/cpp_headers/ioat.o 00:03:13.757 CXX test/cpp_headers/jsonrpc.o 00:03:13.757 CXX test/cpp_headers/json.o 00:03:13.757 CXX test/cpp_headers/iscsi_spec.o 00:03:13.757 CXX test/cpp_headers/keyring.o 00:03:13.757 CXX test/cpp_headers/keyring_module.o 00:03:13.757 CXX test/cpp_headers/likely.o 00:03:13.757 CXX test/cpp_headers/log.o 00:03:13.757 CXX test/cpp_headers/lvol.o 00:03:13.757 CXX test/cpp_headers/md5.o 00:03:13.757 CXX test/cpp_headers/mmio.o 00:03:13.757 CXX test/cpp_headers/memory.o 00:03:13.757 CXX test/cpp_headers/nbd.o 00:03:13.757 CXX test/cpp_headers/nvme_intel.o 00:03:13.757 CXX test/cpp_headers/net.o 00:03:13.757 CXX test/cpp_headers/notify.o 00:03:13.757 CXX test/cpp_headers/nvme_ocssd.o 00:03:13.757 CXX test/cpp_headers/nvme_spec.o 00:03:13.757 CXX test/cpp_headers/nvme.o 00:03:13.757 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:13.757 CXX test/cpp_headers/nvme_zns.o 00:03:13.757 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:13.757 CXX test/cpp_headers/nvmf_cmd.o 00:03:13.757 CXX test/cpp_headers/nvmf.o 00:03:13.757 CXX test/cpp_headers/nvmf_transport.o 00:03:13.757 CXX test/cpp_headers/nvmf_spec.o 00:03:13.757 CXX test/cpp_headers/opal.o 00:03:14.028 CXX test/cpp_headers/opal_spec.o 00:03:14.028 CXX test/cpp_headers/pci_ids.o 00:03:14.028 CXX test/cpp_headers/queue.o 00:03:14.028 CXX test/cpp_headers/reduce.o 00:03:14.028 CXX test/cpp_headers/scheduler.o 00:03:14.028 CXX test/cpp_headers/rpc.o 00:03:14.028 CXX test/cpp_headers/pipe.o 00:03:14.028 CXX test/cpp_headers/sock.o 00:03:14.028 CXX test/cpp_headers/scsi.o 00:03:14.028 CXX test/cpp_headers/string.o 00:03:14.028 CXX test/cpp_headers/scsi_spec.o 00:03:14.028 CXX test/cpp_headers/stdinc.o 00:03:14.028 CC test/thread/poller_perf/poller_perf.o 00:03:14.028 CXX test/cpp_headers/trace.o 00:03:14.028 CXX test/cpp_headers/thread.o 00:03:14.028 CXX test/cpp_headers/ublk.o 00:03:14.028 CXX test/cpp_headers/util.o 00:03:14.028 CXX 
test/cpp_headers/trace_parser.o 00:03:14.028 CXX test/cpp_headers/tree.o 00:03:14.028 CXX test/cpp_headers/uuid.o 00:03:14.028 CXX test/cpp_headers/version.o 00:03:14.028 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.028 CC examples/ioat/verify/verify.o 00:03:14.028 CXX test/cpp_headers/vhost.o 00:03:14.028 CXX test/cpp_headers/vfio_user_pci.o 00:03:14.028 CXX test/cpp_headers/vmd.o 00:03:14.028 CC test/env/pci/pci_ut.o 00:03:14.028 CXX test/cpp_headers/vfio_user_spec.o 00:03:14.028 CXX test/cpp_headers/xor.o 00:03:14.028 CC test/app/histogram_perf/histogram_perf.o 00:03:14.028 CXX test/cpp_headers/zipf.o 00:03:14.028 CC examples/util/zipf/zipf.o 00:03:14.028 CC examples/ioat/perf/perf.o 00:03:14.028 CC test/env/vtophys/vtophys.o 00:03:14.028 CC test/env/memory/memory_ut.o 00:03:14.028 CC test/app/jsoncat/jsoncat.o 00:03:14.028 CC test/app/stub/stub.o 00:03:14.028 LINK spdk_lspci 00:03:14.028 CC test/dma/test_dma/test_dma.o 00:03:14.028 CC test/app/bdev_svc/bdev_svc.o 00:03:14.028 CC app/fio/nvme/fio_plugin.o 00:03:14.301 LINK interrupt_tgt 00:03:14.301 CC app/fio/bdev/fio_plugin.o 00:03:14.301 LINK rpc_client_test 00:03:14.301 LINK spdk_nvme_discover 00:03:14.565 LINK nvmf_tgt 00:03:14.565 LINK iscsi_tgt 00:03:14.565 LINK spdk_trace_record 00:03:14.825 LINK jsoncat 00:03:14.825 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.825 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.825 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.825 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.825 LINK vtophys 00:03:14.825 LINK spdk_tgt 00:03:14.825 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.825 LINK ioat_perf 00:03:15.086 LINK env_dpdk_post_init 00:03:15.086 LINK spdk_dd 00:03:15.086 LINK poller_perf 00:03:15.086 LINK histogram_perf 00:03:15.086 LINK zipf 00:03:15.369 LINK verify 00:03:15.369 LINK stub 00:03:15.369 LINK bdev_svc 00:03:15.369 LINK spdk_trace 00:03:15.369 LINK spdk_top 00:03:15.629 LINK spdk_nvme_perf 00:03:15.629 LINK vhost_fuzz 00:03:15.629 LINK nvme_fuzz 00:03:15.629 LINK test_dma 00:03:15.629 LINK spdk_bdev 00:03:15.629 LINK pci_ut 00:03:15.629 CC examples/idxd/perf/perf.o 00:03:15.629 LINK mem_callbacks 00:03:15.629 LINK spdk_nvme 00:03:15.629 CC examples/vmd/led/led.o 00:03:15.629 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.629 CC examples/sock/hello_world/hello_sock.o 00:03:15.629 CC examples/thread/thread/thread_ex.o 00:03:15.629 CC test/event/event_perf/event_perf.o 00:03:15.629 CC test/event/reactor/reactor.o 00:03:15.629 CC test/event/reactor_perf/reactor_perf.o 00:03:15.629 CC app/vhost/vhost.o 00:03:15.890 CC test/event/app_repeat/app_repeat.o 00:03:15.890 CC test/event/scheduler/scheduler.o 00:03:15.890 LINK spdk_nvme_identify 00:03:15.890 LINK led 00:03:15.890 LINK lsvmd 00:03:15.890 LINK reactor 00:03:15.890 LINK reactor_perf 00:03:15.890 LINK event_perf 00:03:15.890 LINK app_repeat 00:03:15.890 LINK memory_ut 00:03:15.890 LINK hello_sock 00:03:15.890 LINK vhost 00:03:16.151 LINK idxd_perf 00:03:16.151 LINK thread 00:03:16.151 LINK scheduler 00:03:16.151 CC test/nvme/reserve/reserve.o 00:03:16.151 CC test/nvme/compliance/nvme_compliance.o 00:03:16.151 CC test/nvme/overhead/overhead.o 00:03:16.151 CC test/nvme/startup/startup.o 00:03:16.151 CC test/nvme/e2edp/nvme_dp.o 00:03:16.151 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.151 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.151 CC test/nvme/connect_stress/connect_stress.o 00:03:16.151 CC test/nvme/reset/reset.o 00:03:16.151 CC test/nvme/aer/aer.o 00:03:16.151 CC 
test/accel/dif/dif.o 00:03:16.151 CC test/nvme/sgl/sgl.o 00:03:16.151 CC test/nvme/err_injection/err_injection.o 00:03:16.151 CC test/nvme/fdp/fdp.o 00:03:16.151 CC test/nvme/boot_partition/boot_partition.o 00:03:16.151 CC test/nvme/simple_copy/simple_copy.o 00:03:16.151 CC test/nvme/cuse/cuse.o 00:03:16.151 CC test/blobfs/mkfs/mkfs.o 00:03:16.413 CC test/lvol/esnap/esnap.o 00:03:16.413 LINK startup 00:03:16.413 LINK boot_partition 00:03:16.413 LINK reserve 00:03:16.413 LINK connect_stress 00:03:16.413 LINK err_injection 00:03:16.413 LINK doorbell_aers 00:03:16.413 LINK mkfs 00:03:16.413 LINK fused_ordering 00:03:16.674 LINK simple_copy 00:03:16.674 CC examples/nvme/arbitration/arbitration.o 00:03:16.674 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.674 CC examples/nvme/hotplug/hotplug.o 00:03:16.674 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.674 CC examples/nvme/hello_world/hello_world.o 00:03:16.674 CC examples/nvme/reconnect/reconnect.o 00:03:16.674 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.674 CC examples/nvme/abort/abort.o 00:03:16.674 LINK reset 00:03:16.674 LINK sgl 00:03:16.674 LINK nvme_dp 00:03:16.674 LINK nvme_compliance 00:03:16.674 LINK overhead 00:03:16.674 LINK aer 00:03:16.674 LINK fdp 00:03:16.674 LINK iscsi_fuzz 00:03:16.674 CC examples/accel/perf/accel_perf.o 00:03:16.674 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:16.674 CC examples/blob/cli/blobcli.o 00:03:16.674 CC examples/blob/hello_world/hello_blob.o 00:03:16.674 LINK pmr_persistence 00:03:16.935 LINK cmb_copy 00:03:16.935 LINK hotplug 00:03:16.935 LINK hello_world 00:03:16.935 LINK arbitration 00:03:16.935 LINK reconnect 00:03:16.935 LINK dif 00:03:16.936 LINK abort 00:03:16.936 LINK nvme_manage 00:03:16.936 LINK hello_blob 00:03:17.197 LINK hello_fsdev 00:03:17.197 LINK accel_perf 00:03:17.197 LINK blobcli 00:03:17.457 LINK cuse 00:03:17.457 CC test/bdev/bdevio/bdevio.o 00:03:17.718 CC examples/bdev/bdevperf/bdevperf.o 00:03:17.718 CC examples/bdev/hello_world/hello_bdev.o 00:03:17.979 LINK bdevio 00:03:17.979 LINK hello_bdev 00:03:18.551 LINK bdevperf 00:03:19.122 CC examples/nvmf/nvmf/nvmf.o 00:03:19.382 LINK nvmf 00:03:21.295 LINK esnap 00:03:21.295 00:03:21.295 real 0m56.672s 00:03:21.295 user 8m9.320s 00:03:21.295 sys 6m12.572s 00:03:21.295 18:15:15 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:21.295 18:15:15 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.295 ************************************ 00:03:21.295 END TEST make 00:03:21.295 ************************************ 00:03:21.295 18:15:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.295 18:15:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:21.295 18:15:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.295 18:15:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.295 18:15:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.295 18:15:16 -- pm/common@44 -- $ pid=1805643 00:03:21.295 18:15:16 -- pm/common@50 -- $ kill -TERM 1805643 00:03:21.295 18:15:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.295 18:15:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.295 18:15:16 -- pm/common@44 -- $ pid=1805644 00:03:21.295 18:15:16 -- pm/common@50 -- $ kill -TERM 1805644 00:03:21.295 18:15:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:03:21.295 18:15:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:21.295 18:15:16 -- pm/common@44 -- $ pid=1805646 00:03:21.295 18:15:16 -- pm/common@50 -- $ kill -TERM 1805646 00:03:21.295 18:15:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.295 18:15:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:21.295 18:15:16 -- pm/common@44 -- $ pid=1805670 00:03:21.295 18:15:16 -- pm/common@50 -- $ sudo -E kill -TERM 1805670 00:03:21.295 18:15:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:21.295 18:15:16 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:21.557 18:15:16 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:21.557 18:15:16 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:21.557 18:15:16 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:21.557 18:15:16 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:21.557 18:15:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.557 18:15:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.557 18:15:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.557 18:15:16 -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.557 18:15:16 -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.557 18:15:16 -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.557 18:15:16 -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.557 18:15:16 -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.557 18:15:16 -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.557 18:15:16 -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.557 18:15:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.557 18:15:16 -- scripts/common.sh@344 -- # case "$op" in 00:03:21.557 18:15:16 -- scripts/common.sh@345 -- # : 1 00:03:21.557 18:15:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.557 18:15:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:21.557 18:15:16 -- scripts/common.sh@365 -- # decimal 1 00:03:21.557 18:15:16 -- scripts/common.sh@353 -- # local d=1 00:03:21.557 18:15:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.557 18:15:16 -- scripts/common.sh@355 -- # echo 1 00:03:21.557 18:15:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.557 18:15:16 -- scripts/common.sh@366 -- # decimal 2 00:03:21.557 18:15:16 -- scripts/common.sh@353 -- # local d=2 00:03:21.557 18:15:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.557 18:15:16 -- scripts/common.sh@355 -- # echo 2 00:03:21.557 18:15:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.557 18:15:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.557 18:15:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.557 18:15:16 -- scripts/common.sh@368 -- # return 0 00:03:21.557 18:15:16 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.557 18:15:16 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:21.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.557 --rc genhtml_branch_coverage=1 00:03:21.557 --rc genhtml_function_coverage=1 00:03:21.557 --rc genhtml_legend=1 00:03:21.557 --rc geninfo_all_blocks=1 00:03:21.557 --rc geninfo_unexecuted_blocks=1 00:03:21.557 00:03:21.557 ' 00:03:21.557 18:15:16 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:21.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.557 --rc genhtml_branch_coverage=1 00:03:21.557 --rc genhtml_function_coverage=1 00:03:21.557 --rc genhtml_legend=1 00:03:21.557 --rc geninfo_all_blocks=1 00:03:21.557 --rc geninfo_unexecuted_blocks=1 00:03:21.557 00:03:21.557 ' 00:03:21.557 18:15:16 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:21.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.557 --rc genhtml_branch_coverage=1 00:03:21.557 --rc genhtml_function_coverage=1 00:03:21.557 --rc genhtml_legend=1 00:03:21.557 --rc geninfo_all_blocks=1 00:03:21.557 --rc geninfo_unexecuted_blocks=1 00:03:21.557 00:03:21.557 ' 00:03:21.557 18:15:16 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:21.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.557 --rc genhtml_branch_coverage=1 00:03:21.557 --rc genhtml_function_coverage=1 00:03:21.557 --rc genhtml_legend=1 00:03:21.557 --rc geninfo_all_blocks=1 00:03:21.557 --rc geninfo_unexecuted_blocks=1 00:03:21.557 00:03:21.557 ' 00:03:21.557 18:15:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:21.557 18:15:16 -- nvmf/common.sh@7 -- # uname -s 00:03:21.557 18:15:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.557 18:15:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.557 18:15:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.557 18:15:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.557 18:15:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.557 18:15:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.557 18:15:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.557 18:15:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.557 18:15:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.557 18:15:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.557 18:15:16 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:21.557 18:15:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:21.557 18:15:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.557 18:15:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.557 18:15:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:21.557 18:15:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.557 18:15:16 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:21.557 18:15:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:21.557 18:15:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.557 18:15:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.557 18:15:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.557 18:15:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.557 18:15:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.557 18:15:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.557 18:15:16 -- paths/export.sh@5 -- # export PATH 00:03:21.557 18:15:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.557 18:15:16 -- nvmf/common.sh@51 -- # : 0 00:03:21.557 18:15:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:21.557 18:15:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:21.557 18:15:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.557 18:15:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.557 18:15:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.557 18:15:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:21.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:21.557 18:15:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:21.557 18:15:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:21.557 18:15:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:21.557 18:15:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.557 18:15:16 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.557 18:15:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.557 18:15:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.557 18:15:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
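[The `[: : integer expression expected` complaint captured above comes from test/nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: the flag it tests expands to an empty string, and POSIX test(1) requires an integer on both sides of -eq, so the check errors out and the branch is skipped. A minimal sketch of the failure and the usual guard, assuming a generic flag variable; the actual variable name in common.sh is not visible in this trace:]

    FLAG=""                            # unset/empty test flag, as in the trace above

    # Failing form: test(1) sees '' where it needs an integer, prints
    # "[: : integer expression expected", and the branch is skipped
    [ "$FLAG" -eq 1 ] && echo "feature on"

    # Guarded form: default the expansion to 0 so -eq always gets an integer
    [ "${FLAG:-0}" -eq 1 ] && echo "feature on"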
00:03:21.557 18:15:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.557 18:15:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:21.557 18:15:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.557 18:15:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.557 18:15:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.557 18:15:16 -- spdk/autotest.sh@48 -- # udevadm_pid=1871783 00:03:21.557 18:15:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.557 18:15:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.557 18:15:16 -- pm/common@17 -- # local monitor 00:03:21.557 18:15:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.557 18:15:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.557 18:15:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.557 18:15:16 -- pm/common@21 -- # date +%s 00:03:21.557 18:15:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.557 18:15:16 -- pm/common@21 -- # date +%s 00:03:21.557 18:15:16 -- pm/common@25 -- # sleep 1 00:03:21.557 18:15:16 -- pm/common@21 -- # date +%s 00:03:21.557 18:15:16 -- pm/common@21 -- # date +%s 00:03:21.557 18:15:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733505316 00:03:21.557 18:15:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733505316 00:03:21.557 18:15:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733505316 00:03:21.557 18:15:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733505316 00:03:21.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733505316_collect-cpu-load.pm.log 00:03:21.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733505316_collect-vmstat.pm.log 00:03:21.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733505316_collect-cpu-temp.pm.log 00:03:21.818 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733505316_collect-bmc-pm.bmc.pm.log 00:03:22.761 18:15:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.761 18:15:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.761 18:15:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:22.761 18:15:17 -- common/autotest_common.sh@10 -- # set +x 00:03:22.761 18:15:17 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.761 18:15:17 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:22.761 18:15:17 -- common/autotest_common.sh@10 -- # set +x 00:03:22.762 18:15:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:22.762 18:15:17 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.762 18:15:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.762 18:15:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:22.762 18:15:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:22.762 18:15:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.762 18:15:17 -- common/autotest_common.sh@1457 -- # uname 00:03:22.762 18:15:17 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:22.762 18:15:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.762 18:15:17 -- common/autotest_common.sh@1477 -- # uname 00:03:22.762 18:15:17 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:22.762 18:15:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:22.762 18:15:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:22.762 lcov: LCOV version 1.15 00:03:22.762 18:15:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:37.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:37.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:55.839 18:15:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:55.839 18:15:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.839 18:15:47 -- common/autotest_common.sh@10 -- # set +x 00:03:55.839 18:15:47 -- spdk/autotest.sh@78 -- # rm -f 00:03:55.839 18:15:47 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.789 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:56.789 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:56.789 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:57.050 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:57.050 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:57.050 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:57.050 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:57.313 18:15:51 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:57.313 18:15:51 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:57.313 18:15:51 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:57.313 18:15:51 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:57.313 18:15:51 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:57.313 18:15:51 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:57.313 18:15:51 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:57.313 18:15:51 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:03:57.313 18:15:51 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:57.313 18:15:51 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:57.313 18:15:51 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:57.313 18:15:51 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:57.313 18:15:51 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:57.313 18:15:51 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:57.313 18:15:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:57.313 18:15:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:57.313 18:15:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:57.313 18:15:51 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:57.313 18:15:51 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:57.313 No valid GPT data, bailing 00:03:57.313 18:15:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:57.313 18:15:52 -- scripts/common.sh@394 -- # pt= 00:03:57.313 18:15:52 -- scripts/common.sh@395 -- # return 1 00:03:57.313 18:15:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:57.313 1+0 records in 00:03:57.313 1+0 records out 00:03:57.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453872 s, 231 MB/s 00:03:57.313 18:15:52 -- spdk/autotest.sh@105 -- # sync 00:03:57.313 18:15:52 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:57.313 18:15:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:57.313 18:15:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:07.336 18:16:00 -- spdk/autotest.sh@111 -- # uname -s 00:04:07.336 18:16:00 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:07.336 18:16:00 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:07.336 18:16:00 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:09.253 Hugepages 00:04:09.253 node hugesize free / total 00:04:09.253 node0 1048576kB 0 / 0 00:04:09.253 node0 2048kB 0 / 0 00:04:09.253 node1 1048576kB 0 / 0 00:04:09.253 node1 2048kB 0 / 0 00:04:09.253 00:04:09.253 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.253 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:09.253 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:09.253 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:09.253 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:09.253 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:09.253 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:09.514 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:09.514 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:09.514 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:09.514 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:09.514 I/OAT 0000:80:01.1 8086 0b00 1 
ioatdma - - 00:04:09.514 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:09.514 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:09.514 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:09.514 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:09.514 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:09.514 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:09.514 18:16:04 -- spdk/autotest.sh@117 -- # uname -s 00:04:09.514 18:16:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:09.514 18:16:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:09.514 18:16:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:13.722 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:13.722 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:15.107 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:15.368 18:16:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:16.309 18:16:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:16.309 18:16:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:16.309 18:16:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.309 18:16:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:16.309 18:16:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:16.309 18:16:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:16.309 18:16:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.309 18:16:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:16.309 18:16:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:16.309 18:16:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:16.309 18:16:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:16.309 18:16:11 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.613 Waiting for block devices as requested 00:04:19.874 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:19.875 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:19.875 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:20.136 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:20.136 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:20.136 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:20.397 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:20.397 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:20.397 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:20.658 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:20.658 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
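[The `vfio-pci -> ioatdma` entries in this stretch are `setup.sh reset` handing each I/OAT channel back to its kernel driver before the block-device wait. One standard way to move a PCI function between drivers from a shell is the sysfs driver_override sequence sketched below; this is the generic kernel interface, not necessarily the exact calls setup.sh makes, and the BDF is just one of the addresses from the log:]

    bdf=0000:80:01.6                               # one I/OAT channel from the log
    dev=/sys/bus/pci/devices/$bdf

    echo "$bdf"   > "$dev/driver/unbind"           # detach from vfio-pci
    echo ioatdma  > "$dev/driver_override"         # pin the next probe to ioatdma
    echo "$bdf"   > /sys/bus/pci/drivers_probe     # ask the kernel to re-probe it
    echo          > "$dev/driver_override"         # clear the override afterwards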
00:04:20.918 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:20.918 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:20.918 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:21.178 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:21.178 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:21.178 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:21.439 18:16:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:21.439 18:16:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:21.439 18:16:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:21.439 18:16:16 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:21.439 18:16:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:21.439 18:16:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:21.439 18:16:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:21.700 18:16:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:21.700 18:16:16 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:21.700 18:16:16 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:21.700 18:16:16 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:21.700 18:16:16 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:21.700 18:16:16 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:21.700 18:16:16 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:21.700 18:16:16 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:21.700 18:16:16 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:21.700 18:16:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:21.700 18:16:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:21.700 18:16:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:21.700 18:16:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:21.700 18:16:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:21.700 18:16:16 -- common/autotest_common.sh@1543 -- # continue 00:04:21.700 18:16:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:21.700 18:16:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.700 18:16:16 -- common/autotest_common.sh@10 -- # set +x 00:04:21.700 18:16:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:21.701 18:16:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.701 18:16:16 -- common/autotest_common.sh@10 -- # set +x 00:04:21.701 18:16:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.006 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:25.006 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:25.006 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:25.006 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:25.268 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:25.842 18:16:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:25.842 18:16:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:25.842 18:16:20 -- common/autotest_common.sh@10 -- # set +x 00:04:25.842 18:16:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:25.842 18:16:20 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:25.842 18:16:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.842 18:16:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:25.842 18:16:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:25.842 18:16:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:25.842 18:16:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:25.842 18:16:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:25.842 18:16:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:25.842 18:16:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:25.842 18:16:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.842 18:16:20 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:25.842 18:16:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:25.842 18:16:20 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:25.842 18:16:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:25.842 18:16:20 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:25.842 18:16:20 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:25.842 18:16:20 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:25.842 18:16:20 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:25.842 18:16:20 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:25.842 18:16:20 -- common/autotest_common.sh@1572 -- # return 0 00:04:25.842 18:16:20 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:25.842 18:16:20 -- common/autotest_common.sh@1580 -- # return 0 00:04:25.842 18:16:20 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:25.842 18:16:20 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:25.842 18:16:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:25.842 18:16:20 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:25.842 18:16:20 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:25.842 18:16:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.842 18:16:20 -- common/autotest_common.sh@10 -- # set +x 00:04:25.842 18:16:20 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:25.842 18:16:20 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:25.842 18:16:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.842 18:16:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.842 18:16:20 -- common/autotest_common.sh@10 -- # set +x 00:04:25.842 ************************************ 00:04:25.842 START TEST env 00:04:25.842 ************************************ 00:04:25.842 18:16:20 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
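[The opal_revert_cleanup pass a few entries up walks every NVMe BDF that gen_nvme.sh reports and only reverts controllers whose PCI device ID is 0x0a54; the controller on this node reads back 0xa80a, so the loop returns without touching it. The same walk can be reproduced straight from sysfs, as a sketch; the 0x0a54 match and the 0xa80a reading are taken from the trace, while the loop itself is illustrative rather than a function from the repo:]

    for ctrl in /sys/class/nvme/nvme*; do
      pcidev=$(readlink -f "$ctrl/device")   # resolves to .../0000:65:00.0
      bdf=$(basename "$pcidev")
      id=$(cat "$pcidev/device")             # PCI device ID, e.g. 0xa80a
      if [ "$id" = "0x0a54" ]; then
        echo "$bdf: opal-capable controller, would revert"
      else
        echo "$bdf: device $id, skipping"
      fi
    done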
00:04:26.104 * Looking for test storage... 00:04:26.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.104 18:16:20 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.104 18:16:20 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.104 18:16:20 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.104 18:16:20 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.104 18:16:20 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.104 18:16:20 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.104 18:16:20 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.104 18:16:20 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.104 18:16:20 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.104 18:16:20 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.104 18:16:20 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.104 18:16:20 env -- scripts/common.sh@344 -- # case "$op" in 00:04:26.104 18:16:20 env -- scripts/common.sh@345 -- # : 1 00:04:26.104 18:16:20 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.104 18:16:20 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.104 18:16:20 env -- scripts/common.sh@365 -- # decimal 1 00:04:26.104 18:16:20 env -- scripts/common.sh@353 -- # local d=1 00:04:26.104 18:16:20 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.104 18:16:20 env -- scripts/common.sh@355 -- # echo 1 00:04:26.104 18:16:20 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.104 18:16:20 env -- scripts/common.sh@366 -- # decimal 2 00:04:26.104 18:16:20 env -- scripts/common.sh@353 -- # local d=2 00:04:26.104 18:16:20 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.104 18:16:20 env -- scripts/common.sh@355 -- # echo 2 00:04:26.104 18:16:20 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.104 18:16:20 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.104 18:16:20 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.104 18:16:20 env -- scripts/common.sh@368 -- # return 0 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.104 --rc genhtml_branch_coverage=1 00:04:26.104 --rc genhtml_function_coverage=1 00:04:26.104 --rc genhtml_legend=1 00:04:26.104 --rc geninfo_all_blocks=1 00:04:26.104 --rc geninfo_unexecuted_blocks=1 00:04:26.104 00:04:26.104 ' 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.104 --rc genhtml_branch_coverage=1 00:04:26.104 --rc genhtml_function_coverage=1 00:04:26.104 --rc genhtml_legend=1 00:04:26.104 --rc geninfo_all_blocks=1 00:04:26.104 --rc geninfo_unexecuted_blocks=1 00:04:26.104 00:04:26.104 ' 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.104 --rc genhtml_branch_coverage=1 00:04:26.104 
--rc genhtml_function_coverage=1 00:04:26.104 --rc genhtml_legend=1 00:04:26.104 --rc geninfo_all_blocks=1 00:04:26.104 --rc geninfo_unexecuted_blocks=1 00:04:26.104 00:04:26.104 ' 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.104 --rc genhtml_branch_coverage=1 00:04:26.104 --rc genhtml_function_coverage=1 00:04:26.104 --rc genhtml_legend=1 00:04:26.104 --rc geninfo_all_blocks=1 00:04:26.104 --rc geninfo_unexecuted_blocks=1 00:04:26.104 00:04:26.104 ' 00:04:26.104 18:16:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.104 18:16:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.104 18:16:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.104 ************************************ 00:04:26.104 START TEST env_memory 00:04:26.104 ************************************ 00:04:26.104 18:16:20 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:26.104 00:04:26.104 00:04:26.104 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.104 http://cunit.sourceforge.net/ 00:04:26.104 00:04:26.104 00:04:26.104 Suite: memory 00:04:26.104 Test: alloc and free memory map ...[2024-12-06 18:16:20.840424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:26.104 passed 00:04:26.104 Test: mem map translation ...[2024-12-06 18:16:20.865982] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:26.104 [2024-12-06 18:16:20.866010] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:26.104 [2024-12-06 18:16:20.866056] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:26.104 [2024-12-06 18:16:20.866068] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:26.365 passed 00:04:26.365 Test: mem map registration ...[2024-12-06 18:16:20.925548] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:26.366 [2024-12-06 18:16:20.925569] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:26.366 passed 00:04:26.366 Test: mem map adjacent registrations ...passed 00:04:26.366 00:04:26.366 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.366 suites 1 1 n/a 0 0 00:04:26.366 tests 4 4 4 0 0 00:04:26.366 asserts 152 152 152 0 n/a 00:04:26.366 00:04:26.366 Elapsed time = 0.199 seconds 00:04:26.366 00:04:26.366 real 0m0.214s 00:04:26.366 user 0m0.202s 00:04:26.366 sys 0m0.011s 00:04:26.366 18:16:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.366 18:16:21 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:04:26.366 ************************************ 00:04:26.366 END TEST env_memory 00:04:26.366 ************************************ 00:04:26.366 18:16:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:26.366 18:16:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.366 18:16:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.366 18:16:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.366 ************************************ 00:04:26.366 START TEST env_vtophys 00:04:26.366 ************************************ 00:04:26.366 18:16:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:26.366 EAL: lib.eal log level changed from notice to debug 00:04:26.366 EAL: Detected lcore 0 as core 0 on socket 0 00:04:26.366 EAL: Detected lcore 1 as core 1 on socket 0 00:04:26.366 EAL: Detected lcore 2 as core 2 on socket 0 00:04:26.366 EAL: Detected lcore 3 as core 3 on socket 0 00:04:26.366 EAL: Detected lcore 4 as core 4 on socket 0 00:04:26.366 EAL: Detected lcore 5 as core 5 on socket 0 00:04:26.366 EAL: Detected lcore 6 as core 6 on socket 0 00:04:26.366 EAL: Detected lcore 7 as core 7 on socket 0 00:04:26.366 EAL: Detected lcore 8 as core 8 on socket 0 00:04:26.366 EAL: Detected lcore 9 as core 9 on socket 0 00:04:26.366 EAL: Detected lcore 10 as core 10 on socket 0 00:04:26.366 EAL: Detected lcore 11 as core 11 on socket 0 00:04:26.366 EAL: Detected lcore 12 as core 12 on socket 0 00:04:26.366 EAL: Detected lcore 13 as core 13 on socket 0 00:04:26.366 EAL: Detected lcore 14 as core 14 on socket 0 00:04:26.366 EAL: Detected lcore 15 as core 15 on socket 0 00:04:26.366 EAL: Detected lcore 16 as core 16 on socket 0 00:04:26.366 EAL: Detected lcore 17 as core 17 on socket 0 00:04:26.366 EAL: Detected lcore 18 as core 18 on socket 0 00:04:26.366 EAL: Detected lcore 19 as core 19 on socket 0 00:04:26.366 EAL: Detected lcore 20 as core 20 on socket 0 00:04:26.366 EAL: Detected lcore 21 as core 21 on socket 0 00:04:26.366 EAL: Detected lcore 22 as core 22 on socket 0 00:04:26.366 EAL: Detected lcore 23 as core 23 on socket 0 00:04:26.366 EAL: Detected lcore 24 as core 24 on socket 0 00:04:26.366 EAL: Detected lcore 25 as core 25 on socket 0 00:04:26.366 EAL: Detected lcore 26 as core 26 on socket 0 00:04:26.366 EAL: Detected lcore 27 as core 27 on socket 0 00:04:26.366 EAL: Detected lcore 28 as core 28 on socket 0 00:04:26.366 EAL: Detected lcore 29 as core 29 on socket 0 00:04:26.366 EAL: Detected lcore 30 as core 30 on socket 0 00:04:26.366 EAL: Detected lcore 31 as core 31 on socket 0 00:04:26.366 EAL: Detected lcore 32 as core 32 on socket 0 00:04:26.366 EAL: Detected lcore 33 as core 33 on socket 0 00:04:26.366 EAL: Detected lcore 34 as core 34 on socket 0 00:04:26.366 EAL: Detected lcore 35 as core 35 on socket 0 00:04:26.366 EAL: Detected lcore 36 as core 0 on socket 1 00:04:26.366 EAL: Detected lcore 37 as core 1 on socket 1 00:04:26.366 EAL: Detected lcore 38 as core 2 on socket 1 00:04:26.366 EAL: Detected lcore 39 as core 3 on socket 1 00:04:26.366 EAL: Detected lcore 40 as core 4 on socket 1 00:04:26.366 EAL: Detected lcore 41 as core 5 on socket 1 00:04:26.366 EAL: Detected lcore 42 as core 6 on socket 1 00:04:26.366 EAL: Detected lcore 43 as core 7 on socket 1 00:04:26.366 EAL: Detected lcore 44 as core 8 on socket 1 00:04:26.366 EAL: Detected 
lcore 45 as core 9 on socket 1 00:04:26.366 EAL: Detected lcore 46 as core 10 on socket 1 00:04:26.366 EAL: Detected lcore 47 as core 11 on socket 1 00:04:26.366 EAL: Detected lcore 48 as core 12 on socket 1 00:04:26.366 EAL: Detected lcore 49 as core 13 on socket 1 00:04:26.366 EAL: Detected lcore 50 as core 14 on socket 1 00:04:26.366 EAL: Detected lcore 51 as core 15 on socket 1 00:04:26.366 EAL: Detected lcore 52 as core 16 on socket 1 00:04:26.366 EAL: Detected lcore 53 as core 17 on socket 1 00:04:26.366 EAL: Detected lcore 54 as core 18 on socket 1 00:04:26.366 EAL: Detected lcore 55 as core 19 on socket 1 00:04:26.366 EAL: Detected lcore 56 as core 20 on socket 1 00:04:26.366 EAL: Detected lcore 57 as core 21 on socket 1 00:04:26.366 EAL: Detected lcore 58 as core 22 on socket 1 00:04:26.366 EAL: Detected lcore 59 as core 23 on socket 1 00:04:26.366 EAL: Detected lcore 60 as core 24 on socket 1 00:04:26.366 EAL: Detected lcore 61 as core 25 on socket 1 00:04:26.366 EAL: Detected lcore 62 as core 26 on socket 1 00:04:26.366 EAL: Detected lcore 63 as core 27 on socket 1 00:04:26.366 EAL: Detected lcore 64 as core 28 on socket 1 00:04:26.366 EAL: Detected lcore 65 as core 29 on socket 1 00:04:26.366 EAL: Detected lcore 66 as core 30 on socket 1 00:04:26.366 EAL: Detected lcore 67 as core 31 on socket 1 00:04:26.366 EAL: Detected lcore 68 as core 32 on socket 1 00:04:26.366 EAL: Detected lcore 69 as core 33 on socket 1 00:04:26.366 EAL: Detected lcore 70 as core 34 on socket 1 00:04:26.366 EAL: Detected lcore 71 as core 35 on socket 1 00:04:26.366 EAL: Detected lcore 72 as core 0 on socket 0 00:04:26.366 EAL: Detected lcore 73 as core 1 on socket 0 00:04:26.366 EAL: Detected lcore 74 as core 2 on socket 0 00:04:26.366 EAL: Detected lcore 75 as core 3 on socket 0 00:04:26.366 EAL: Detected lcore 76 as core 4 on socket 0 00:04:26.366 EAL: Detected lcore 77 as core 5 on socket 0 00:04:26.366 EAL: Detected lcore 78 as core 6 on socket 0 00:04:26.366 EAL: Detected lcore 79 as core 7 on socket 0 00:04:26.366 EAL: Detected lcore 80 as core 8 on socket 0 00:04:26.366 EAL: Detected lcore 81 as core 9 on socket 0 00:04:26.366 EAL: Detected lcore 82 as core 10 on socket 0 00:04:26.366 EAL: Detected lcore 83 as core 11 on socket 0 00:04:26.366 EAL: Detected lcore 84 as core 12 on socket 0 00:04:26.366 EAL: Detected lcore 85 as core 13 on socket 0 00:04:26.366 EAL: Detected lcore 86 as core 14 on socket 0 00:04:26.366 EAL: Detected lcore 87 as core 15 on socket 0 00:04:26.366 EAL: Detected lcore 88 as core 16 on socket 0 00:04:26.366 EAL: Detected lcore 89 as core 17 on socket 0 00:04:26.366 EAL: Detected lcore 90 as core 18 on socket 0 00:04:26.366 EAL: Detected lcore 91 as core 19 on socket 0 00:04:26.366 EAL: Detected lcore 92 as core 20 on socket 0 00:04:26.366 EAL: Detected lcore 93 as core 21 on socket 0 00:04:26.366 EAL: Detected lcore 94 as core 22 on socket 0 00:04:26.366 EAL: Detected lcore 95 as core 23 on socket 0 00:04:26.366 EAL: Detected lcore 96 as core 24 on socket 0 00:04:26.366 EAL: Detected lcore 97 as core 25 on socket 0 00:04:26.366 EAL: Detected lcore 98 as core 26 on socket 0 00:04:26.366 EAL: Detected lcore 99 as core 27 on socket 0 00:04:26.366 EAL: Detected lcore 100 as core 28 on socket 0 00:04:26.366 EAL: Detected lcore 101 as core 29 on socket 0 00:04:26.366 EAL: Detected lcore 102 as core 30 on socket 0 00:04:26.366 EAL: Detected lcore 103 as core 31 on socket 0 00:04:26.366 EAL: Detected lcore 104 as core 32 on socket 0 00:04:26.366 EAL: Detected lcore 105 as core 33 
on socket 0 00:04:26.366 EAL: Detected lcore 106 as core 34 on socket 0 00:04:26.366 EAL: Detected lcore 107 as core 35 on socket 0 00:04:26.366 EAL: Detected lcore 108 as core 0 on socket 1 00:04:26.366 EAL: Detected lcore 109 as core 1 on socket 1 00:04:26.366 EAL: Detected lcore 110 as core 2 on socket 1 00:04:26.366 EAL: Detected lcore 111 as core 3 on socket 1 00:04:26.366 EAL: Detected lcore 112 as core 4 on socket 1 00:04:26.366 EAL: Detected lcore 113 as core 5 on socket 1 00:04:26.366 EAL: Detected lcore 114 as core 6 on socket 1 00:04:26.366 EAL: Detected lcore 115 as core 7 on socket 1 00:04:26.366 EAL: Detected lcore 116 as core 8 on socket 1 00:04:26.366 EAL: Detected lcore 117 as core 9 on socket 1 00:04:26.366 EAL: Detected lcore 118 as core 10 on socket 1 00:04:26.366 EAL: Detected lcore 119 as core 11 on socket 1 00:04:26.366 EAL: Detected lcore 120 as core 12 on socket 1 00:04:26.366 EAL: Detected lcore 121 as core 13 on socket 1 00:04:26.366 EAL: Detected lcore 122 as core 14 on socket 1 00:04:26.366 EAL: Detected lcore 123 as core 15 on socket 1 00:04:26.366 EAL: Detected lcore 124 as core 16 on socket 1 00:04:26.366 EAL: Detected lcore 125 as core 17 on socket 1 00:04:26.366 EAL: Detected lcore 126 as core 18 on socket 1 00:04:26.366 EAL: Detected lcore 127 as core 19 on socket 1 00:04:26.366 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:26.366 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:26.366 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:26.366 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:26.366 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:26.366 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:26.366 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:26.366 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:26.366 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:26.366 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:26.366 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:26.366 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:26.366 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:26.366 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:26.366 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:26.366 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:26.366 EAL: Maximum logical cores by configuration: 128 00:04:26.366 EAL: Detected CPU lcores: 128 00:04:26.366 EAL: Detected NUMA nodes: 2 00:04:26.366 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:26.366 EAL: Detected shared linkage of DPDK 00:04:26.366 EAL: No shared files mode enabled, IPC will be disabled 00:04:26.366 EAL: Bus pci wants IOVA as 'DC' 00:04:26.366 EAL: Buses did not request a specific IOVA mode. 00:04:26.366 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:26.366 EAL: Selected IOVA mode 'VA' 00:04:26.366 EAL: Probing VFIO support... 00:04:26.366 EAL: IOMMU type 1 (Type 1) is supported 00:04:26.366 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:26.366 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:26.366 EAL: VFIO support initialized 00:04:26.367 EAL: Ask a virtual area of 0x2e000 bytes 00:04:26.367 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:26.367 EAL: Setting up physically contiguous memory... 
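[EAL settles on IOVA mode 'VA' above because the platform exposes a type-1 IOMMU and VFIO initializes. A rough shell pre-flight for the same three facts EAL just probed (IOMMU presence, the vfio modules, and the hugepages that back the memseg lists set up below) using only standard kernel paths; this is a sketch, not part of the test run:]

    # IOMMU groups exist only when the kernel has an IOMMU configured
    ls /sys/kernel/iommu_groups/ | head

    # EAL's "VFIO support initialized" needs vfio/vfio_pci loaded (or built in)
    lsmod | grep -E '^vfio'

    # The 2048kB pools seen in the memseg setup come from these counters
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo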
00:04:26.367 EAL: Setting maximum number of open files to 524288 00:04:26.367 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:26.367 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:26.367 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:26.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.367 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:26.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.367 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:26.367 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:26.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.367 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:26.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.367 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:26.367 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:26.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.367 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:26.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.367 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:26.367 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:26.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.367 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:26.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.367 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:26.367 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:26.367 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:26.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.367 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:26.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.367 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:26.367 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:26.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.367 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:26.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.367 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:26.367 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:26.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.367 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:26.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.367 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:26.367 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:26.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.367 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:26.367 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:26.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.367 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:26.367 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:26.367 EAL: Hugepages will be freed exactly as allocated. 00:04:26.367 EAL: No shared files mode enabled, IPC is disabled 00:04:26.367 EAL: No shared files mode enabled, IPC is disabled 00:04:26.367 EAL: TSC frequency is ~2400000 KHz 00:04:26.367 EAL: Main lcore 0 is ready (tid=7f604a090a00;cpuset=[0]) 00:04:26.367 EAL: Trying to obtain current memory policy. 00:04:26.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.367 EAL: Restoring previous memory policy: 0 00:04:26.367 EAL: request: mp_malloc_sync 00:04:26.367 EAL: No shared files mode enabled, IPC is disabled 00:04:26.367 EAL: Heap on socket 0 was expanded by 2MB 00:04:26.367 EAL: No shared files mode enabled, IPC is disabled 00:04:26.627 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:26.627 EAL: Mem event callback 'spdk:(nil)' registered 00:04:26.627 00:04:26.627 00:04:26.627 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.627 http://cunit.sourceforge.net/ 00:04:26.627 00:04:26.627 00:04:26.627 Suite: components_suite 00:04:26.627 Test: vtophys_malloc_test ...passed 00:04:26.627 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:26.627 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.627 EAL: Restoring previous memory policy: 4 00:04:26.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.627 EAL: request: mp_malloc_sync 00:04:26.627 EAL: No shared files mode enabled, IPC is disabled 00:04:26.627 EAL: Heap on socket 0 was expanded by 4MB 00:04:26.627 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.627 EAL: request: mp_malloc_sync 00:04:26.627 EAL: No shared files mode enabled, IPC is disabled 00:04:26.627 EAL: Heap on socket 0 was shrunk by 4MB 00:04:26.627 EAL: Trying to obtain current memory policy. 00:04:26.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.628 EAL: Restoring previous memory policy: 4 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was expanded by 6MB 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was shrunk by 6MB 00:04:26.628 EAL: Trying to obtain current memory policy. 00:04:26.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.628 EAL: Restoring previous memory policy: 4 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was expanded by 10MB 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was shrunk by 10MB 00:04:26.628 EAL: Trying to obtain current memory policy. 
00:04:26.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.628 EAL: Restoring previous memory policy: 4 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was expanded by 18MB 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was shrunk by 18MB 00:04:26.628 EAL: Trying to obtain current memory policy. 00:04:26.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.628 EAL: Restoring previous memory policy: 4 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was expanded by 34MB 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was shrunk by 34MB 00:04:26.628 EAL: Trying to obtain current memory policy. 00:04:26.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.628 EAL: Restoring previous memory policy: 4 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was expanded by 66MB 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was shrunk by 66MB 00:04:26.628 EAL: Trying to obtain current memory policy. 00:04:26.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.628 EAL: Restoring previous memory policy: 4 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was expanded by 130MB 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was shrunk by 130MB 00:04:26.628 EAL: Trying to obtain current memory policy. 00:04:26.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.628 EAL: Restoring previous memory policy: 4 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was expanded by 258MB 00:04:26.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.628 EAL: request: mp_malloc_sync 00:04:26.628 EAL: No shared files mode enabled, IPC is disabled 00:04:26.628 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.628 EAL: Trying to obtain current memory policy. 
00:04:26.628 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.887 EAL: Restoring previous memory policy: 4 00:04:26.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.887 EAL: request: mp_malloc_sync 00:04:26.887 EAL: No shared files mode enabled, IPC is disabled 00:04:26.887 EAL: Heap on socket 0 was expanded by 514MB 00:04:26.887 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.887 EAL: request: mp_malloc_sync 00:04:26.887 EAL: No shared files mode enabled, IPC is disabled 00:04:26.887 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.887 EAL: Trying to obtain current memory policy. 00:04:26.887 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.147 EAL: Restoring previous memory policy: 4 00:04:27.147 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.147 EAL: request: mp_malloc_sync 00:04:27.147 EAL: No shared files mode enabled, IPC is disabled 00:04:27.147 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.147 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.147 EAL: request: mp_malloc_sync 00:04:27.147 EAL: No shared files mode enabled, IPC is disabled 00:04:27.147 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:27.147 passed 00:04:27.147 00:04:27.147 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.147 suites 1 1 n/a 0 0 00:04:27.147 tests 2 2 2 0 0 00:04:27.147 asserts 497 497 497 0 n/a 00:04:27.147 00:04:27.147 Elapsed time = 0.688 seconds 00:04:27.147 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.147 EAL: request: mp_malloc_sync 00:04:27.147 EAL: No shared files mode enabled, IPC is disabled 00:04:27.147 EAL: Heap on socket 0 was shrunk by 2MB 00:04:27.147 EAL: No shared files mode enabled, IPC is disabled 00:04:27.147 EAL: No shared files mode enabled, IPC is disabled 00:04:27.147 EAL: No shared files mode enabled, IPC is disabled 00:04:27.147 00:04:27.147 real 0m0.837s 00:04:27.147 user 0m0.439s 00:04:27.147 sys 0m0.374s 00:04:27.147 18:16:21 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.147 18:16:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:27.147 ************************************ 00:04:27.147 END TEST env_vtophys 00:04:27.147 ************************************ 00:04:27.408 18:16:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:27.408 18:16:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.408 18:16:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.408 18:16:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.408 ************************************ 00:04:27.408 START TEST env_pci 00:04:27.408 ************************************ 00:04:27.408 18:16:21 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:27.408 00:04:27.408 00:04:27.408 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.408 http://cunit.sourceforge.net/ 00:04:27.408 00:04:27.408 00:04:27.408 Suite: pci 00:04:27.408 Test: pci_hook ...[2024-12-06 18:16:22.013761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1891138 has claimed it 00:04:27.408 EAL: Cannot find device (10000:00:01.0) 00:04:27.408 EAL: Failed to attach device on primary process 00:04:27.408 passed 00:04:27.408 00:04:27.408 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:27.408 suites 1 1 n/a 0 0 00:04:27.408 tests 1 1 1 0 0 00:04:27.408 asserts 25 25 25 0 n/a 00:04:27.408 00:04:27.408 Elapsed time = 0.031 seconds 00:04:27.408 00:04:27.408 real 0m0.053s 00:04:27.408 user 0m0.018s 00:04:27.408 sys 0m0.034s 00:04:27.408 18:16:22 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.408 18:16:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:27.408 ************************************ 00:04:27.408 END TEST env_pci 00:04:27.408 ************************************ 00:04:27.408 18:16:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:27.408 18:16:22 env -- env/env.sh@15 -- # uname 00:04:27.408 18:16:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:27.408 18:16:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:27.408 18:16:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.408 18:16:22 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:27.408 18:16:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.408 18:16:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.408 ************************************ 00:04:27.408 START TEST env_dpdk_post_init 00:04:27.408 ************************************ 00:04:27.408 18:16:22 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.408 EAL: Detected CPU lcores: 128 00:04:27.408 EAL: Detected NUMA nodes: 2 00:04:27.408 EAL: Detected shared linkage of DPDK 00:04:27.408 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.668 EAL: Selected IOVA mode 'VA' 00:04:27.668 EAL: VFIO support initialized 00:04:27.668 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.668 EAL: Using IOMMU type 1 (Type 1) 00:04:27.668 EAL: Ignore mapping IO port bar(1) 00:04:27.928 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:27.928 EAL: Ignore mapping IO port bar(1) 00:04:28.188 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:28.188 EAL: Ignore mapping IO port bar(1) 00:04:28.188 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:28.448 EAL: Ignore mapping IO port bar(1) 00:04:28.448 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:28.707 EAL: Ignore mapping IO port bar(1) 00:04:28.707 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:28.967 EAL: Ignore mapping IO port bar(1) 00:04:28.967 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:29.226 EAL: Ignore mapping IO port bar(1) 00:04:29.226 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:29.226 EAL: Ignore mapping IO port bar(1) 00:04:29.486 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:29.746 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:29.746 EAL: Ignore mapping IO port bar(1) 00:04:30.006 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:30.006 EAL: Ignore mapping IO port bar(1) 00:04:30.006 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:30.267 EAL: Ignore mapping IO port bar(1) 00:04:30.267 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:30.527 EAL: Ignore mapping IO port bar(1) 00:04:30.527 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:30.786 EAL: Ignore mapping IO port bar(1) 00:04:30.786 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:30.786 EAL: Ignore mapping IO port bar(1) 00:04:31.046 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:31.046 EAL: Ignore mapping IO port bar(1) 00:04:31.306 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:31.306 EAL: Ignore mapping IO port bar(1) 00:04:31.566 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:31.566 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:31.566 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:31.566 Starting DPDK initialization... 00:04:31.566 Starting SPDK post initialization... 00:04:31.566 SPDK NVMe probe 00:04:31.566 Attaching to 0000:65:00.0 00:04:31.566 Attached to 0000:65:00.0 00:04:31.566 Cleaning up... 00:04:33.539 00:04:33.539 real 0m5.756s 00:04:33.539 user 0m0.106s 00:04:33.539 sys 0m0.206s 00:04:33.539 18:16:27 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.539 18:16:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 ************************************ 00:04:33.539 END TEST env_dpdk_post_init 00:04:33.539 ************************************ 00:04:33.539 18:16:27 env -- env/env.sh@26 -- # uname 00:04:33.539 18:16:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:33.539 18:16:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.539 18:16:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.539 18:16:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.539 18:16:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 ************************************ 00:04:33.539 START TEST env_mem_callbacks 00:04:33.539 ************************************ 00:04:33.539 18:16:27 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.539 EAL: Detected CPU lcores: 128 00:04:33.539 EAL: Detected NUMA nodes: 2 00:04:33.539 EAL: Detected shared linkage of DPDK 00:04:33.539 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.539 EAL: Selected IOVA mode 'VA' 00:04:33.539 EAL: VFIO support initialized 00:04:33.539 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.539 00:04:33.539 00:04:33.539 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.539 http://cunit.sourceforge.net/ 00:04:33.539 00:04:33.539 00:04:33.539 Suite: memory 00:04:33.539 Test: test ... 
00:04:33.539 register 0x200000200000 2097152 00:04:33.539 malloc 3145728 00:04:33.539 register 0x200000400000 4194304 00:04:33.539 buf 0x200000500000 len 3145728 PASSED 00:04:33.539 malloc 64 00:04:33.539 buf 0x2000004fff40 len 64 PASSED 00:04:33.539 malloc 4194304 00:04:33.539 register 0x200000800000 6291456 00:04:33.539 buf 0x200000a00000 len 4194304 PASSED 00:04:33.539 free 0x200000500000 3145728 00:04:33.539 free 0x2000004fff40 64 00:04:33.539 unregister 0x200000400000 4194304 PASSED 00:04:33.539 free 0x200000a00000 4194304 00:04:33.539 unregister 0x200000800000 6291456 PASSED 00:04:33.539 malloc 8388608 00:04:33.539 register 0x200000400000 10485760 00:04:33.539 buf 0x200000600000 len 8388608 PASSED 00:04:33.539 free 0x200000600000 8388608 00:04:33.539 unregister 0x200000400000 10485760 PASSED 00:04:33.539 passed 00:04:33.539 00:04:33.539 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.539 suites 1 1 n/a 0 0 00:04:33.539 tests 1 1 1 0 0 00:04:33.539 asserts 15 15 15 0 n/a 00:04:33.539 00:04:33.539 Elapsed time = 0.010 seconds 00:04:33.539 00:04:33.539 real 0m0.067s 00:04:33.539 user 0m0.021s 00:04:33.539 sys 0m0.046s 00:04:33.539 18:16:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.539 18:16:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 ************************************ 00:04:33.539 END TEST env_mem_callbacks 00:04:33.539 ************************************ 00:04:33.539 00:04:33.539 real 0m7.544s 00:04:33.539 user 0m1.036s 00:04:33.539 sys 0m1.072s 00:04:33.539 18:16:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.539 18:16:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 ************************************ 00:04:33.539 END TEST env 00:04:33.539 ************************************ 00:04:33.539 18:16:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:33.539 18:16:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.539 18:16:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.539 18:16:28 -- common/autotest_common.sh@10 -- # set +x 00:04:33.539 ************************************ 00:04:33.539 START TEST rpc 00:04:33.539 ************************************ 00:04:33.539 18:16:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:33.539 * Looking for test storage... 
00:04:33.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.539 18:16:28 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.539 18:16:28 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.539 18:16:28 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.801 18:16:28 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:33.801 18:16:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.801 18:16:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.801 18:16:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.801 18:16:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.801 18:16:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.801 18:16:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.801 18:16:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.801 18:16:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.801 18:16:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.801 18:16:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.801 18:16:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.801 18:16:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.801 18:16:28 rpc -- scripts/common.sh@345 -- # : 1 00:04:33.801 18:16:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.801 18:16:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.801 18:16:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.801 18:16:28 rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.801 18:16:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.801 18:16:28 rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.801 18:16:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.801 18:16:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.801 18:16:28 rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.801 18:16:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.801 18:16:28 rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.801 18:16:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.801 18:16:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.801 18:16:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.801 18:16:28 rpc -- scripts/common.sh@368 -- # return 0 00:04:33.801 18:16:28 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.801 18:16:28 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:33.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.801 --rc genhtml_branch_coverage=1 00:04:33.801 --rc genhtml_function_coverage=1 00:04:33.801 --rc genhtml_legend=1 00:04:33.801 --rc geninfo_all_blocks=1 00:04:33.801 --rc geninfo_unexecuted_blocks=1 00:04:33.801 00:04:33.801 ' 00:04:33.801 18:16:28 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:33.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.801 --rc genhtml_branch_coverage=1 00:04:33.801 --rc genhtml_function_coverage=1 00:04:33.801 --rc genhtml_legend=1 00:04:33.801 --rc geninfo_all_blocks=1 00:04:33.801 --rc geninfo_unexecuted_blocks=1 00:04:33.801 00:04:33.801 ' 00:04:33.801 18:16:28 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:33.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.801 --rc genhtml_branch_coverage=1 00:04:33.801 --rc genhtml_function_coverage=1 
00:04:33.801 --rc genhtml_legend=1 00:04:33.801 --rc geninfo_all_blocks=1 00:04:33.801 --rc geninfo_unexecuted_blocks=1 00:04:33.801 00:04:33.801 ' 00:04:33.801 18:16:28 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:33.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.801 --rc genhtml_branch_coverage=1 00:04:33.801 --rc genhtml_function_coverage=1 00:04:33.801 --rc genhtml_legend=1 00:04:33.801 --rc geninfo_all_blocks=1 00:04:33.801 --rc geninfo_unexecuted_blocks=1 00:04:33.801 00:04:33.801 ' 00:04:33.801 18:16:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:33.801 18:16:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1892520 00:04:33.801 18:16:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.801 18:16:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1892520 00:04:33.801 18:16:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 1892520 ']' 00:04:33.802 18:16:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.802 18:16:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.802 18:16:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.802 18:16:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.802 18:16:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.802 [2024-12-06 18:16:28.418057] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:04:33.802 [2024-12-06 18:16:28.418122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892520 ] 00:04:33.802 [2024-12-06 18:16:28.510897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.802 [2024-12-06 18:16:28.563013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.802 [2024-12-06 18:16:28.563071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1892520' to capture a snapshot of events at runtime. 00:04:33.802 [2024-12-06 18:16:28.563080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.802 [2024-12-06 18:16:28.563088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.802 [2024-12-06 18:16:28.563095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1892520 for offline analysis/debug. 
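A note on the env_mem_callbacks output earlier: its register/unregister lines are produced by a memory-map notify callback. SPDK replays every spdk_mem_register()'d region into each newly allocated map and notifies again on unregister, which is how subsystems keep per-region state (IOMMU mappings, vtophys entries) in sync. A minimal sketch against spdk/env.h; the function and variable names are illustrative:

    #include <stdio.h>
    #include "spdk/env.h"

    /* Called once per registered/unregistered region; with spdk_env_init()
     * done, this prints lines shaped like "register 0x200000200000 2097152". */
    static int
    demo_notify(void *cb_ctx, struct spdk_mem_map *map,
                enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops demo_ops = {
        .notify_cb = demo_notify,
        .are_contiguous = NULL,
    };

    static struct spdk_mem_map *
    demo_map_create(void)
    {
        /* 0 is the default translation for addresses not yet set. */
        return spdk_mem_map_alloc(0, &demo_ops, NULL);
    }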
00:04:33.802 [2024-12-06 18:16:28.563879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.744 18:16:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.744 18:16:29 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.744 18:16:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:34.744 18:16:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:34.744 18:16:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:34.744 18:16:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:34.744 18:16:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.744 18:16:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.744 18:16:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.744 ************************************ 00:04:34.744 START TEST rpc_integrity 00:04:34.744 ************************************ 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.744 { 00:04:34.744 "name": "Malloc0", 00:04:34.744 "aliases": [ 00:04:34.744 "0966474f-1031-4a14-9376-8e1a49ba23a4" 00:04:34.744 ], 00:04:34.744 "product_name": "Malloc disk", 00:04:34.744 "block_size": 512, 00:04:34.744 "num_blocks": 16384, 00:04:34.744 "uuid": "0966474f-1031-4a14-9376-8e1a49ba23a4", 00:04:34.744 "assigned_rate_limits": { 00:04:34.744 "rw_ios_per_sec": 0, 00:04:34.744 "rw_mbytes_per_sec": 0, 00:04:34.744 "r_mbytes_per_sec": 0, 00:04:34.744 "w_mbytes_per_sec": 0 00:04:34.744 }, 
00:04:34.744 "claimed": false, 00:04:34.744 "zoned": false, 00:04:34.744 "supported_io_types": { 00:04:34.744 "read": true, 00:04:34.744 "write": true, 00:04:34.744 "unmap": true, 00:04:34.744 "flush": true, 00:04:34.744 "reset": true, 00:04:34.744 "nvme_admin": false, 00:04:34.744 "nvme_io": false, 00:04:34.744 "nvme_io_md": false, 00:04:34.744 "write_zeroes": true, 00:04:34.744 "zcopy": true, 00:04:34.744 "get_zone_info": false, 00:04:34.744 "zone_management": false, 00:04:34.744 "zone_append": false, 00:04:34.744 "compare": false, 00:04:34.744 "compare_and_write": false, 00:04:34.744 "abort": true, 00:04:34.744 "seek_hole": false, 00:04:34.744 "seek_data": false, 00:04:34.744 "copy": true, 00:04:34.744 "nvme_iov_md": false 00:04:34.744 }, 00:04:34.744 "memory_domains": [ 00:04:34.744 { 00:04:34.744 "dma_device_id": "system", 00:04:34.744 "dma_device_type": 1 00:04:34.744 }, 00:04:34.744 { 00:04:34.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.744 "dma_device_type": 2 00:04:34.744 } 00:04:34.744 ], 00:04:34.744 "driver_specific": {} 00:04:34.744 } 00:04:34.744 ]' 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.744 [2024-12-06 18:16:29.442793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:34.744 [2024-12-06 18:16:29.442844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.744 [2024-12-06 18:16:29.442860] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e26f80 00:04:34.744 [2024-12-06 18:16:29.442869] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.744 [2024-12-06 18:16:29.444478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.744 [2024-12-06 18:16:29.444516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.744 Passthru0 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.744 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.744 { 00:04:34.744 "name": "Malloc0", 00:04:34.744 "aliases": [ 00:04:34.744 "0966474f-1031-4a14-9376-8e1a49ba23a4" 00:04:34.744 ], 00:04:34.744 "product_name": "Malloc disk", 00:04:34.744 "block_size": 512, 00:04:34.744 "num_blocks": 16384, 00:04:34.744 "uuid": "0966474f-1031-4a14-9376-8e1a49ba23a4", 00:04:34.744 "assigned_rate_limits": { 00:04:34.744 "rw_ios_per_sec": 0, 00:04:34.744 "rw_mbytes_per_sec": 0, 00:04:34.744 "r_mbytes_per_sec": 0, 00:04:34.744 "w_mbytes_per_sec": 0 00:04:34.744 }, 00:04:34.744 "claimed": true, 00:04:34.744 "claim_type": "exclusive_write", 00:04:34.744 "zoned": false, 00:04:34.744 "supported_io_types": { 00:04:34.744 "read": true, 00:04:34.744 "write": true, 00:04:34.744 "unmap": true, 00:04:34.744 "flush": 
true, 00:04:34.744 "reset": true, 00:04:34.744 "nvme_admin": false, 00:04:34.744 "nvme_io": false, 00:04:34.744 "nvme_io_md": false, 00:04:34.744 "write_zeroes": true, 00:04:34.744 "zcopy": true, 00:04:34.744 "get_zone_info": false, 00:04:34.744 "zone_management": false, 00:04:34.744 "zone_append": false, 00:04:34.744 "compare": false, 00:04:34.744 "compare_and_write": false, 00:04:34.744 "abort": true, 00:04:34.744 "seek_hole": false, 00:04:34.744 "seek_data": false, 00:04:34.744 "copy": true, 00:04:34.744 "nvme_iov_md": false 00:04:34.744 }, 00:04:34.744 "memory_domains": [ 00:04:34.744 { 00:04:34.744 "dma_device_id": "system", 00:04:34.744 "dma_device_type": 1 00:04:34.744 }, 00:04:34.744 { 00:04:34.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.744 "dma_device_type": 2 00:04:34.744 } 00:04:34.744 ], 00:04:34.744 "driver_specific": {} 00:04:34.744 }, 00:04:34.744 { 00:04:34.744 "name": "Passthru0", 00:04:34.744 "aliases": [ 00:04:34.744 "e5380a52-c864-5348-9ac6-5e76a928b503" 00:04:34.744 ], 00:04:34.744 "product_name": "passthru", 00:04:34.744 "block_size": 512, 00:04:34.744 "num_blocks": 16384, 00:04:34.744 "uuid": "e5380a52-c864-5348-9ac6-5e76a928b503", 00:04:34.744 "assigned_rate_limits": { 00:04:34.744 "rw_ios_per_sec": 0, 00:04:34.744 "rw_mbytes_per_sec": 0, 00:04:34.744 "r_mbytes_per_sec": 0, 00:04:34.744 "w_mbytes_per_sec": 0 00:04:34.744 }, 00:04:34.744 "claimed": false, 00:04:34.744 "zoned": false, 00:04:34.744 "supported_io_types": { 00:04:34.744 "read": true, 00:04:34.744 "write": true, 00:04:34.744 "unmap": true, 00:04:34.744 "flush": true, 00:04:34.744 "reset": true, 00:04:34.744 "nvme_admin": false, 00:04:34.744 "nvme_io": false, 00:04:34.744 "nvme_io_md": false, 00:04:34.744 "write_zeroes": true, 00:04:34.744 "zcopy": true, 00:04:34.744 "get_zone_info": false, 00:04:34.744 "zone_management": false, 00:04:34.744 "zone_append": false, 00:04:34.744 "compare": false, 00:04:34.744 "compare_and_write": false, 00:04:34.744 "abort": true, 00:04:34.744 "seek_hole": false, 00:04:34.744 "seek_data": false, 00:04:34.744 "copy": true, 00:04:34.744 "nvme_iov_md": false 00:04:34.744 }, 00:04:34.744 "memory_domains": [ 00:04:34.744 { 00:04:34.744 "dma_device_id": "system", 00:04:34.744 "dma_device_type": 1 00:04:34.744 }, 00:04:34.744 { 00:04:34.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.744 "dma_device_type": 2 00:04:34.744 } 00:04:34.744 ], 00:04:34.744 "driver_specific": { 00:04:34.744 "passthru": { 00:04:34.744 "name": "Passthru0", 00:04:34.744 "base_bdev_name": "Malloc0" 00:04:34.744 } 00:04:34.744 } 00:04:34.744 } 00:04:34.744 ]' 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.744 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.745 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.745 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.006 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.006 18:16:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.006 00:04:35.006 real 0m0.302s 00:04:35.006 user 0m0.186s 00:04:35.006 sys 0m0.040s 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.006 18:16:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 ************************************ 00:04:35.006 END TEST rpc_integrity 00:04:35.006 ************************************ 00:04:35.006 18:16:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.006 18:16:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.006 18:16:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.006 18:16:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 ************************************ 00:04:35.006 START TEST rpc_plugins 00:04:35.006 ************************************ 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:35.006 { 00:04:35.006 "name": "Malloc1", 00:04:35.006 "aliases": [ 00:04:35.006 "a42e7cdf-f47d-4bba-97e3-66b6eba218d3" 00:04:35.006 ], 00:04:35.006 "product_name": "Malloc disk", 00:04:35.006 "block_size": 4096, 00:04:35.006 "num_blocks": 256, 00:04:35.006 "uuid": "a42e7cdf-f47d-4bba-97e3-66b6eba218d3", 00:04:35.006 "assigned_rate_limits": { 00:04:35.006 "rw_ios_per_sec": 0, 00:04:35.006 "rw_mbytes_per_sec": 0, 00:04:35.006 "r_mbytes_per_sec": 0, 00:04:35.006 "w_mbytes_per_sec": 0 00:04:35.006 }, 00:04:35.006 "claimed": false, 00:04:35.006 "zoned": false, 00:04:35.006 "supported_io_types": { 00:04:35.006 "read": true, 00:04:35.006 "write": true, 00:04:35.006 "unmap": true, 00:04:35.006 "flush": true, 00:04:35.006 "reset": true, 00:04:35.006 "nvme_admin": false, 00:04:35.006 "nvme_io": false, 00:04:35.006 "nvme_io_md": false, 00:04:35.006 "write_zeroes": true, 00:04:35.006 "zcopy": true, 00:04:35.006 "get_zone_info": false, 00:04:35.006 "zone_management": false, 00:04:35.006 "zone_append": false, 00:04:35.006 "compare": false, 00:04:35.006 "compare_and_write": false, 00:04:35.006 "abort": true, 00:04:35.006 "seek_hole": false, 00:04:35.006 "seek_data": false, 00:04:35.006 "copy": true, 00:04:35.006 "nvme_iov_md": false 
00:04:35.006 }, 00:04:35.006 "memory_domains": [ 00:04:35.006 { 00:04:35.006 "dma_device_id": "system", 00:04:35.006 "dma_device_type": 1 00:04:35.006 }, 00:04:35.006 { 00:04:35.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.006 "dma_device_type": 2 00:04:35.006 } 00:04:35.006 ], 00:04:35.006 "driver_specific": {} 00:04:35.006 } 00:04:35.006 ]' 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.006 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.006 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.267 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.267 18:16:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.267 00:04:35.267 real 0m0.153s 00:04:35.267 user 0m0.090s 00:04:35.267 sys 0m0.027s 00:04:35.267 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.267 18:16:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.267 ************************************ 00:04:35.267 END TEST rpc_plugins 00:04:35.267 ************************************ 00:04:35.267 18:16:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.267 18:16:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.267 18:16:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.267 18:16:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.267 ************************************ 00:04:35.267 START TEST rpc_trace_cmd_test 00:04:35.267 ************************************ 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:35.267 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1892520", 00:04:35.267 "tpoint_group_mask": "0x8", 00:04:35.267 "iscsi_conn": { 00:04:35.267 "mask": "0x2", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "scsi": { 00:04:35.267 "mask": "0x4", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "bdev": { 00:04:35.267 "mask": "0x8", 00:04:35.267 "tpoint_mask": "0xffffffffffffffff" 00:04:35.267 }, 00:04:35.267 "nvmf_rdma": { 00:04:35.267 "mask": "0x10", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "nvmf_tcp": { 00:04:35.267 "mask": "0x20", 00:04:35.267 
"tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "ftl": { 00:04:35.267 "mask": "0x40", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "blobfs": { 00:04:35.267 "mask": "0x80", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "dsa": { 00:04:35.267 "mask": "0x200", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "thread": { 00:04:35.267 "mask": "0x400", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "nvme_pcie": { 00:04:35.267 "mask": "0x800", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "iaa": { 00:04:35.267 "mask": "0x1000", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "nvme_tcp": { 00:04:35.267 "mask": "0x2000", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "bdev_nvme": { 00:04:35.267 "mask": "0x4000", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "sock": { 00:04:35.267 "mask": "0x8000", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "blob": { 00:04:35.267 "mask": "0x10000", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "bdev_raid": { 00:04:35.267 "mask": "0x20000", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 }, 00:04:35.267 "scheduler": { 00:04:35.267 "mask": "0x40000", 00:04:35.267 "tpoint_mask": "0x0" 00:04:35.267 } 00:04:35.267 }' 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:35.267 18:16:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.267 18:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.267 18:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:35.527 18:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:35.527 18:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.527 18:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.527 18:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.527 18:16:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.527 00:04:35.527 real 0m0.258s 00:04:35.527 user 0m0.217s 00:04:35.527 sys 0m0.031s 00:04:35.527 18:16:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.528 18:16:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.528 ************************************ 00:04:35.528 END TEST rpc_trace_cmd_test 00:04:35.528 ************************************ 00:04:35.528 18:16:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.528 18:16:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.528 18:16:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.528 18:16:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.528 18:16:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.528 18:16:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.528 ************************************ 00:04:35.528 START TEST rpc_daemon_integrity 00:04:35.528 ************************************ 00:04:35.528 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:35.528 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.528 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.528 18:16:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.528 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.528 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.528 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.789 { 00:04:35.789 "name": "Malloc2", 00:04:35.789 "aliases": [ 00:04:35.789 "6a0deb07-150f-4642-9134-bc28892d93ba" 00:04:35.789 ], 00:04:35.789 "product_name": "Malloc disk", 00:04:35.789 "block_size": 512, 00:04:35.789 "num_blocks": 16384, 00:04:35.789 "uuid": "6a0deb07-150f-4642-9134-bc28892d93ba", 00:04:35.789 "assigned_rate_limits": { 00:04:35.789 "rw_ios_per_sec": 0, 00:04:35.789 "rw_mbytes_per_sec": 0, 00:04:35.789 "r_mbytes_per_sec": 0, 00:04:35.789 "w_mbytes_per_sec": 0 00:04:35.789 }, 00:04:35.789 "claimed": false, 00:04:35.789 "zoned": false, 00:04:35.789 "supported_io_types": { 00:04:35.789 "read": true, 00:04:35.789 "write": true, 00:04:35.789 "unmap": true, 00:04:35.789 "flush": true, 00:04:35.789 "reset": true, 00:04:35.789 "nvme_admin": false, 00:04:35.789 "nvme_io": false, 00:04:35.789 "nvme_io_md": false, 00:04:35.789 "write_zeroes": true, 00:04:35.789 "zcopy": true, 00:04:35.789 "get_zone_info": false, 00:04:35.789 "zone_management": false, 00:04:35.789 "zone_append": false, 00:04:35.789 "compare": false, 00:04:35.789 "compare_and_write": false, 00:04:35.789 "abort": true, 00:04:35.789 "seek_hole": false, 00:04:35.789 "seek_data": false, 00:04:35.789 "copy": true, 00:04:35.789 "nvme_iov_md": false 00:04:35.789 }, 00:04:35.789 "memory_domains": [ 00:04:35.789 { 00:04:35.789 "dma_device_id": "system", 00:04:35.789 "dma_device_type": 1 00:04:35.789 }, 00:04:35.789 { 00:04:35.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.789 "dma_device_type": 2 00:04:35.789 } 00:04:35.789 ], 00:04:35.789 "driver_specific": {} 00:04:35.789 } 00:04:35.789 ]' 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.789 [2024-12-06 18:16:30.393592] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.789 
[2024-12-06 18:16:30.393650] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.789 [2024-12-06 18:16:30.393671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e26c40 00:04:35.789 [2024-12-06 18:16:30.393680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.789 [2024-12-06 18:16:30.395226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.789 [2024-12-06 18:16:30.395263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.789 Passthru0 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.789 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.789 { 00:04:35.789 "name": "Malloc2", 00:04:35.789 "aliases": [ 00:04:35.789 "6a0deb07-150f-4642-9134-bc28892d93ba" 00:04:35.789 ], 00:04:35.789 "product_name": "Malloc disk", 00:04:35.789 "block_size": 512, 00:04:35.789 "num_blocks": 16384, 00:04:35.789 "uuid": "6a0deb07-150f-4642-9134-bc28892d93ba", 00:04:35.789 "assigned_rate_limits": { 00:04:35.789 "rw_ios_per_sec": 0, 00:04:35.789 "rw_mbytes_per_sec": 0, 00:04:35.789 "r_mbytes_per_sec": 0, 00:04:35.789 "w_mbytes_per_sec": 0 00:04:35.789 }, 00:04:35.789 "claimed": true, 00:04:35.789 "claim_type": "exclusive_write", 00:04:35.789 "zoned": false, 00:04:35.789 "supported_io_types": { 00:04:35.789 "read": true, 00:04:35.789 "write": true, 00:04:35.789 "unmap": true, 00:04:35.789 "flush": true, 00:04:35.789 "reset": true, 00:04:35.789 "nvme_admin": false, 00:04:35.789 "nvme_io": false, 00:04:35.789 "nvme_io_md": false, 00:04:35.789 "write_zeroes": true, 00:04:35.789 "zcopy": true, 00:04:35.789 "get_zone_info": false, 00:04:35.789 "zone_management": false, 00:04:35.789 "zone_append": false, 00:04:35.789 "compare": false, 00:04:35.789 "compare_and_write": false, 00:04:35.789 "abort": true, 00:04:35.789 "seek_hole": false, 00:04:35.789 "seek_data": false, 00:04:35.789 "copy": true, 00:04:35.789 "nvme_iov_md": false 00:04:35.789 }, 00:04:35.789 "memory_domains": [ 00:04:35.789 { 00:04:35.789 "dma_device_id": "system", 00:04:35.789 "dma_device_type": 1 00:04:35.789 }, 00:04:35.789 { 00:04:35.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.789 "dma_device_type": 2 00:04:35.789 } 00:04:35.789 ], 00:04:35.789 "driver_specific": {} 00:04:35.789 }, 00:04:35.789 { 00:04:35.789 "name": "Passthru0", 00:04:35.789 "aliases": [ 00:04:35.789 "fb0cae7e-608d-55a1-bc8c-130977f7404b" 00:04:35.789 ], 00:04:35.789 "product_name": "passthru", 00:04:35.789 "block_size": 512, 00:04:35.789 "num_blocks": 16384, 00:04:35.789 "uuid": "fb0cae7e-608d-55a1-bc8c-130977f7404b", 00:04:35.789 "assigned_rate_limits": { 00:04:35.789 "rw_ios_per_sec": 0, 00:04:35.789 "rw_mbytes_per_sec": 0, 00:04:35.789 "r_mbytes_per_sec": 0, 00:04:35.789 "w_mbytes_per_sec": 0 00:04:35.789 }, 00:04:35.789 "claimed": false, 00:04:35.789 "zoned": false, 00:04:35.789 "supported_io_types": { 00:04:35.789 "read": true, 00:04:35.789 "write": true, 00:04:35.789 "unmap": true, 00:04:35.789 "flush": true, 00:04:35.789 "reset": true, 
00:04:35.789 "nvme_admin": false, 00:04:35.790 "nvme_io": false, 00:04:35.790 "nvme_io_md": false, 00:04:35.790 "write_zeroes": true, 00:04:35.790 "zcopy": true, 00:04:35.790 "get_zone_info": false, 00:04:35.790 "zone_management": false, 00:04:35.790 "zone_append": false, 00:04:35.790 "compare": false, 00:04:35.790 "compare_and_write": false, 00:04:35.790 "abort": true, 00:04:35.790 "seek_hole": false, 00:04:35.790 "seek_data": false, 00:04:35.790 "copy": true, 00:04:35.790 "nvme_iov_md": false 00:04:35.790 }, 00:04:35.790 "memory_domains": [ 00:04:35.790 { 00:04:35.790 "dma_device_id": "system", 00:04:35.790 "dma_device_type": 1 00:04:35.790 }, 00:04:35.790 { 00:04:35.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.790 "dma_device_type": 2 00:04:35.790 } 00:04:35.790 ], 00:04:35.790 "driver_specific": { 00:04:35.790 "passthru": { 00:04:35.790 "name": "Passthru0", 00:04:35.790 "base_bdev_name": "Malloc2" 00:04:35.790 } 00:04:35.790 } 00:04:35.790 } 00:04:35.790 ]' 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.790 00:04:35.790 real 0m0.309s 00:04:35.790 user 0m0.191s 00:04:35.790 sys 0m0.052s 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.790 18:16:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.790 ************************************ 00:04:35.790 END TEST rpc_daemon_integrity 00:04:35.790 ************************************ 00:04:36.050 18:16:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.050 18:16:30 rpc -- rpc/rpc.sh@84 -- # killprocess 1892520 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 1892520 ']' 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@958 -- # kill -0 1892520 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1892520 
00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1892520' 00:04:36.050 killing process with pid 1892520 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@973 -- # kill 1892520 00:04:36.050 18:16:30 rpc -- common/autotest_common.sh@978 -- # wait 1892520 00:04:36.310 00:04:36.310 real 0m2.739s 00:04:36.310 user 0m3.522s 00:04:36.310 sys 0m0.817s 00:04:36.310 18:16:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.310 18:16:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.310 ************************************ 00:04:36.310 END TEST rpc 00:04:36.310 ************************************ 00:04:36.310 18:16:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.310 18:16:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.310 18:16:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.310 18:16:30 -- common/autotest_common.sh@10 -- # set +x 00:04:36.310 ************************************ 00:04:36.310 START TEST skip_rpc 00:04:36.310 ************************************ 00:04:36.310 18:16:30 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:36.310 * Looking for test storage... 00:04:36.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.571 18:16:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.571 --rc genhtml_branch_coverage=1 00:04:36.571 --rc genhtml_function_coverage=1 00:04:36.571 --rc genhtml_legend=1 00:04:36.571 --rc geninfo_all_blocks=1 00:04:36.571 --rc geninfo_unexecuted_blocks=1 00:04:36.571 00:04:36.571 ' 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.571 --rc genhtml_branch_coverage=1 00:04:36.571 --rc genhtml_function_coverage=1 00:04:36.571 --rc genhtml_legend=1 00:04:36.571 --rc geninfo_all_blocks=1 00:04:36.571 --rc geninfo_unexecuted_blocks=1 00:04:36.571 00:04:36.571 ' 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.571 --rc genhtml_branch_coverage=1 00:04:36.571 --rc genhtml_function_coverage=1 00:04:36.571 --rc genhtml_legend=1 00:04:36.571 --rc geninfo_all_blocks=1 00:04:36.571 --rc geninfo_unexecuted_blocks=1 00:04:36.571 00:04:36.571 ' 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.571 --rc genhtml_branch_coverage=1 00:04:36.571 --rc genhtml_function_coverage=1 00:04:36.571 --rc genhtml_legend=1 00:04:36.571 --rc geninfo_all_blocks=1 00:04:36.571 --rc geninfo_unexecuted_blocks=1 00:04:36.571 00:04:36.571 ' 00:04:36.571 18:16:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.571 18:16:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:36.571 18:16:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.571 18:16:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.571 ************************************ 00:04:36.571 START TEST skip_rpc 00:04:36.571 ************************************ 00:04:36.571 18:16:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:36.571 
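The cmp_versions trace just above (also seen in the rpc preamble) is how the harness decides whether the installed lcov is older than 2.0 before choosing coverage flags: version strings are split on '.', '-' and ':' and each field is compared numerically, so 1.15 sorts below 2. The same comparison in compact form, as a sketch rather than a transcription of the shell helper:

    #include <stdio.h>
    #include <stdlib.h>

    /* Field-wise numeric version compare; returns <0, 0 or >0 like strcmp. */
    static int
    cmp_versions(const char *a, const char *b)
    {
        while (*a || *b) {
            char *ea, *eb;
            long x = strtol(a, &ea, 10);
            long y = strtol(b, &eb, 10);
            if (x != y)
                return (x > y) - (x < y);
            a = (*ea != '\0') ? ea + 1 : ea;  /* skip one '.', '-' or ':' */
            b = (*eb != '\0') ? eb + 1 : eb;
        }
        return 0;
    }

    int
    main(void)
    {
        /* Mirrors the "lt 1.15 2" check in the trace: prints "yes". */
        printf("1.15 < 2: %s\n", cmp_versions("1.15", "2") < 0 ? "yes" : "no");
        return 0;
    }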
18:16:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1893370 00:04:36.571 18:16:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.571 18:16:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:36.571 18:16:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.571 [2024-12-06 18:16:31.290756] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:04:36.571 [2024-12-06 18:16:31.290812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893370 ] 00:04:36.831 [2024-12-06 18:16:31.383752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.831 [2024-12-06 18:16:31.436605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1893370 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1893370 ']' 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1893370 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1893370 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1893370' 00:04:42.113 killing process with pid 1893370 00:04:42.113 18:16:36 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1893370 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1893370 00:04:42.113 00:04:42.113 real 0m5.265s 00:04:42.113 user 0m5.016s 00:04:42.113 sys 0m0.299s 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.113 18:16:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.113 ************************************ 00:04:42.113 END TEST skip_rpc 00:04:42.113 ************************************ 00:04:42.113 18:16:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:42.113 18:16:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.113 18:16:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.113 18:16:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.113 ************************************ 00:04:42.113 START TEST skip_rpc_with_json 00:04:42.113 ************************************ 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1894409 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1894409 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1894409 ']' 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.113 18:16:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.113 [2024-12-06 18:16:36.632883] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
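Before the skip_rpc_with_json startup above gets under way, the plain skip_rpc test that just finished is worth summarizing: it boots spdk_tgt with --no-rpc-server, proves an RPC call cannot succeed while the target is running, then kills the target cleanly. A minimal sketch of that flow, using the binary and rpc.py paths from this log (the NOT/killprocess helpers are paraphrased, not the exact autotest_common.sh code):

  #!/usr/bin/env bash
  # skip_rpc pattern: target up, RPC deliberately unreachable.
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$SPDK_BIN" --no-rpc-server -m 0x1 &
  spdk_pid=$!
  trap 'kill -9 $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  sleep 5                                  # same settle time the test uses

  # No RPC server was started, so this call must fail for the test to pass.
  if "$RPC" spdk_get_version >/dev/null 2>&1; then
      echo 'FAIL: RPC answered despite --no-rpc-server' >&2
      exit 1
  fi

  trap - SIGINT SIGTERM EXIT
  kill "$spdk_pid" && wait "$spdk_pid"     # killprocess equivalent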
00:04:42.113 [2024-12-06 18:16:36.632935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894409 ] 00:04:42.113 [2024-12-06 18:16:36.715552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.113 [2024-12-06 18:16:36.750964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.684 [2024-12-06 18:16:37.422824] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.684 request: 00:04:42.684 { 00:04:42.684 "trtype": "tcp", 00:04:42.684 "method": "nvmf_get_transports", 00:04:42.684 "req_id": 1 00:04:42.684 } 00:04:42.684 Got JSON-RPC error response 00:04:42.684 response: 00:04:42.684 { 00:04:42.684 "code": -19, 00:04:42.684 "message": "No such device" 00:04:42.684 } 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.684 [2024-12-06 18:16:37.434923] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.684 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.946 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.946 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.946 { 00:04:42.946 "subsystems": [ 00:04:42.946 { 00:04:42.946 "subsystem": "fsdev", 00:04:42.946 "config": [ 00:04:42.946 { 00:04:42.946 "method": "fsdev_set_opts", 00:04:42.946 "params": { 00:04:42.946 "fsdev_io_pool_size": 65535, 00:04:42.946 "fsdev_io_cache_size": 256 00:04:42.946 } 00:04:42.946 } 00:04:42.946 ] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "vfio_user_target", 00:04:42.946 "config": null 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "keyring", 00:04:42.946 "config": [] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "iobuf", 00:04:42.946 "config": [ 00:04:42.946 { 00:04:42.946 "method": "iobuf_set_options", 00:04:42.946 "params": { 00:04:42.946 "small_pool_count": 8192, 00:04:42.946 "large_pool_count": 1024, 00:04:42.946 "small_bufsize": 8192, 00:04:42.946 "large_bufsize": 135168, 00:04:42.946 "enable_numa": false 00:04:42.946 } 00:04:42.946 } 
00:04:42.946 ] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "sock", 00:04:42.946 "config": [ 00:04:42.946 { 00:04:42.946 "method": "sock_set_default_impl", 00:04:42.946 "params": { 00:04:42.946 "impl_name": "posix" 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "sock_impl_set_options", 00:04:42.946 "params": { 00:04:42.946 "impl_name": "ssl", 00:04:42.946 "recv_buf_size": 4096, 00:04:42.946 "send_buf_size": 4096, 00:04:42.946 "enable_recv_pipe": true, 00:04:42.946 "enable_quickack": false, 00:04:42.946 "enable_placement_id": 0, 00:04:42.946 "enable_zerocopy_send_server": true, 00:04:42.946 "enable_zerocopy_send_client": false, 00:04:42.946 "zerocopy_threshold": 0, 00:04:42.946 "tls_version": 0, 00:04:42.946 "enable_ktls": false 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "sock_impl_set_options", 00:04:42.946 "params": { 00:04:42.946 "impl_name": "posix", 00:04:42.946 "recv_buf_size": 2097152, 00:04:42.946 "send_buf_size": 2097152, 00:04:42.946 "enable_recv_pipe": true, 00:04:42.946 "enable_quickack": false, 00:04:42.946 "enable_placement_id": 0, 00:04:42.946 "enable_zerocopy_send_server": true, 00:04:42.946 "enable_zerocopy_send_client": false, 00:04:42.946 "zerocopy_threshold": 0, 00:04:42.946 "tls_version": 0, 00:04:42.946 "enable_ktls": false 00:04:42.946 } 00:04:42.946 } 00:04:42.946 ] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "vmd", 00:04:42.946 "config": [] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "accel", 00:04:42.946 "config": [ 00:04:42.946 { 00:04:42.946 "method": "accel_set_options", 00:04:42.946 "params": { 00:04:42.946 "small_cache_size": 128, 00:04:42.946 "large_cache_size": 16, 00:04:42.946 "task_count": 2048, 00:04:42.946 "sequence_count": 2048, 00:04:42.946 "buf_count": 2048 00:04:42.946 } 00:04:42.946 } 00:04:42.946 ] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "bdev", 00:04:42.946 "config": [ 00:04:42.946 { 00:04:42.946 "method": "bdev_set_options", 00:04:42.946 "params": { 00:04:42.946 "bdev_io_pool_size": 65535, 00:04:42.946 "bdev_io_cache_size": 256, 00:04:42.946 "bdev_auto_examine": true, 00:04:42.946 "iobuf_small_cache_size": 128, 00:04:42.946 "iobuf_large_cache_size": 16 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "bdev_raid_set_options", 00:04:42.946 "params": { 00:04:42.946 "process_window_size_kb": 1024, 00:04:42.946 "process_max_bandwidth_mb_sec": 0 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "bdev_iscsi_set_options", 00:04:42.946 "params": { 00:04:42.946 "timeout_sec": 30 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "bdev_nvme_set_options", 00:04:42.946 "params": { 00:04:42.946 "action_on_timeout": "none", 00:04:42.946 "timeout_us": 0, 00:04:42.946 "timeout_admin_us": 0, 00:04:42.946 "keep_alive_timeout_ms": 10000, 00:04:42.946 "arbitration_burst": 0, 00:04:42.946 "low_priority_weight": 0, 00:04:42.946 "medium_priority_weight": 0, 00:04:42.946 "high_priority_weight": 0, 00:04:42.946 "nvme_adminq_poll_period_us": 10000, 00:04:42.946 "nvme_ioq_poll_period_us": 0, 00:04:42.946 "io_queue_requests": 0, 00:04:42.946 "delay_cmd_submit": true, 00:04:42.946 "transport_retry_count": 4, 00:04:42.946 "bdev_retry_count": 3, 00:04:42.946 "transport_ack_timeout": 0, 00:04:42.946 "ctrlr_loss_timeout_sec": 0, 00:04:42.946 "reconnect_delay_sec": 0, 00:04:42.946 "fast_io_fail_timeout_sec": 0, 00:04:42.946 "disable_auto_failback": false, 00:04:42.946 "generate_uuids": false, 00:04:42.946 "transport_tos": 
0, 00:04:42.946 "nvme_error_stat": false, 00:04:42.946 "rdma_srq_size": 0, 00:04:42.946 "io_path_stat": false, 00:04:42.946 "allow_accel_sequence": false, 00:04:42.946 "rdma_max_cq_size": 0, 00:04:42.946 "rdma_cm_event_timeout_ms": 0, 00:04:42.946 "dhchap_digests": [ 00:04:42.946 "sha256", 00:04:42.946 "sha384", 00:04:42.946 "sha512" 00:04:42.946 ], 00:04:42.946 "dhchap_dhgroups": [ 00:04:42.946 "null", 00:04:42.946 "ffdhe2048", 00:04:42.946 "ffdhe3072", 00:04:42.946 "ffdhe4096", 00:04:42.946 "ffdhe6144", 00:04:42.946 "ffdhe8192" 00:04:42.946 ] 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "bdev_nvme_set_hotplug", 00:04:42.946 "params": { 00:04:42.946 "period_us": 100000, 00:04:42.946 "enable": false 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "bdev_wait_for_examine" 00:04:42.946 } 00:04:42.946 ] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "scsi", 00:04:42.946 "config": null 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "scheduler", 00:04:42.946 "config": [ 00:04:42.946 { 00:04:42.946 "method": "framework_set_scheduler", 00:04:42.946 "params": { 00:04:42.946 "name": "static" 00:04:42.946 } 00:04:42.946 } 00:04:42.946 ] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "vhost_scsi", 00:04:42.946 "config": [] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "vhost_blk", 00:04:42.946 "config": [] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "ublk", 00:04:42.946 "config": [] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "nbd", 00:04:42.946 "config": [] 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "subsystem": "nvmf", 00:04:42.946 "config": [ 00:04:42.946 { 00:04:42.946 "method": "nvmf_set_config", 00:04:42.946 "params": { 00:04:42.946 "discovery_filter": "match_any", 00:04:42.946 "admin_cmd_passthru": { 00:04:42.946 "identify_ctrlr": false 00:04:42.946 }, 00:04:42.946 "dhchap_digests": [ 00:04:42.946 "sha256", 00:04:42.946 "sha384", 00:04:42.946 "sha512" 00:04:42.946 ], 00:04:42.946 "dhchap_dhgroups": [ 00:04:42.946 "null", 00:04:42.946 "ffdhe2048", 00:04:42.946 "ffdhe3072", 00:04:42.946 "ffdhe4096", 00:04:42.946 "ffdhe6144", 00:04:42.946 "ffdhe8192" 00:04:42.946 ] 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "nvmf_set_max_subsystems", 00:04:42.946 "params": { 00:04:42.946 "max_subsystems": 1024 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "nvmf_set_crdt", 00:04:42.946 "params": { 00:04:42.946 "crdt1": 0, 00:04:42.946 "crdt2": 0, 00:04:42.946 "crdt3": 0 00:04:42.946 } 00:04:42.946 }, 00:04:42.946 { 00:04:42.946 "method": "nvmf_create_transport", 00:04:42.946 "params": { 00:04:42.946 "trtype": "TCP", 00:04:42.946 "max_queue_depth": 128, 00:04:42.947 "max_io_qpairs_per_ctrlr": 127, 00:04:42.947 "in_capsule_data_size": 4096, 00:04:42.947 "max_io_size": 131072, 00:04:42.947 "io_unit_size": 131072, 00:04:42.947 "max_aq_depth": 128, 00:04:42.947 "num_shared_buffers": 511, 00:04:42.947 "buf_cache_size": 4294967295, 00:04:42.947 "dif_insert_or_strip": false, 00:04:42.947 "zcopy": false, 00:04:42.947 "c2h_success": true, 00:04:42.947 "sock_priority": 0, 00:04:42.947 "abort_timeout_sec": 1, 00:04:42.947 "ack_timeout": 0, 00:04:42.947 "data_wr_pool_size": 0 00:04:42.947 } 00:04:42.947 } 00:04:42.947 ] 00:04:42.947 }, 00:04:42.947 { 00:04:42.947 "subsystem": "iscsi", 00:04:42.947 "config": [ 00:04:42.947 { 00:04:42.947 "method": "iscsi_set_options", 00:04:42.947 "params": { 00:04:42.947 "node_base": "iqn.2016-06.io.spdk", 00:04:42.947 "max_sessions": 
128, 00:04:42.947 "max_connections_per_session": 2, 00:04:42.947 "max_queue_depth": 64, 00:04:42.947 "default_time2wait": 2, 00:04:42.947 "default_time2retain": 20, 00:04:42.947 "first_burst_length": 8192, 00:04:42.947 "immediate_data": true, 00:04:42.947 "allow_duplicated_isid": false, 00:04:42.947 "error_recovery_level": 0, 00:04:42.947 "nop_timeout": 60, 00:04:42.947 "nop_in_interval": 30, 00:04:42.947 "disable_chap": false, 00:04:42.947 "require_chap": false, 00:04:42.947 "mutual_chap": false, 00:04:42.947 "chap_group": 0, 00:04:42.947 "max_large_datain_per_connection": 64, 00:04:42.947 "max_r2t_per_connection": 4, 00:04:42.947 "pdu_pool_size": 36864, 00:04:42.947 "immediate_data_pool_size": 16384, 00:04:42.947 "data_out_pool_size": 2048 00:04:42.947 } 00:04:42.947 } 00:04:42.947 ] 00:04:42.947 } 00:04:42.947 ] 00:04:42.947 } 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1894409 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1894409 ']' 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1894409 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1894409 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1894409' 00:04:42.947 killing process with pid 1894409 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1894409 00:04:42.947 18:16:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1894409 00:04:43.207 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1894750 00:04:43.207 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:43.207 18:16:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1894750 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1894750 ']' 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1894750 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1894750 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1894750' 00:04:48.487 killing process with pid 1894750 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1894750 00:04:48.487 18:16:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1894750 00:04:48.487 18:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:48.487 18:16:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:48.487 00:04:48.487 real 0m6.553s 00:04:48.487 user 0m6.453s 00:04:48.487 sys 0m0.573s 00:04:48.487 18:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.487 18:16:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.487 ************************************ 00:04:48.487 END TEST skip_rpc_with_json 00:04:48.487 ************************************ 00:04:48.487 18:16:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:48.487 18:16:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.487 18:16:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.487 18:16:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.487 ************************************ 00:04:48.487 START TEST skip_rpc_with_delay 00:04:48.487 ************************************ 00:04:48.487 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:48.487 18:16:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.487 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:48.488 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.488 
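The skip_rpc_with_json test that just ended validates a round trip: the configuration captured by save_config is handed back to a fresh target with --json, and the boot log is grepped to confirm the TCP transport really was re-created. A sketch of that replay, with paths taken from the log (the redirect into log.txt is implied by the later grep, not shown verbatim above):

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  CFG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
  LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt

  # Replay the saved config with no RPC server; capture the boot log.
  "$SPDK_BIN" --no-rpc-server -m 0x1 --json "$CFG" > "$LOG" 2>&1 &
  spdk_pid=$!
  sleep 5
  kill "$spdk_pid" && wait "$spdk_pid"

  grep -q 'TCP Transport Init' "$LOG"      # proves nvmf_create_transport replayed
  rm "$LOG"

The skip_rpc_with_delay invocation directly above is expected to fail, as its error output next shows: --wait-for-rpc is meaningless when no RPC server is going to be started.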
[2024-12-06 18:16:43.270105] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:48.748 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:48.748 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.748 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.748 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.748 00:04:48.748 real 0m0.080s 00:04:48.748 user 0m0.052s 00:04:48.748 sys 0m0.027s 00:04:48.748 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.748 18:16:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:48.748 ************************************ 00:04:48.748 END TEST skip_rpc_with_delay 00:04:48.748 ************************************ 00:04:48.748 18:16:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:48.748 18:16:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:48.748 18:16:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:48.748 18:16:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.748 18:16:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.748 18:16:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.748 ************************************ 00:04:48.748 START TEST exit_on_failed_rpc_init 00:04:48.748 ************************************ 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1895815 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1895815 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1895815 ']' 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.748 18:16:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.748 [2024-12-06 18:16:43.429372] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:04:48.748 [2024-12-06 18:16:43.429427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895815 ] 00:04:48.748 [2024-12-06 18:16:43.513522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.008 [2024-12-06 18:16:43.544257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.576 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.576 [2024-12-06 18:16:44.269083] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:04:49.576 [2024-12-06 18:16:44.269136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895972 ] 00:04:49.576 [2024-12-06 18:16:44.355454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.834 [2024-12-06 18:16:44.391228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.834 [2024-12-06 18:16:44.391276] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
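The "socket path in use" error above (continued below) is the crux of exit_on_failed_rpc_init: a second spdk_tgt is pointed at the same default RPC socket as the first, so rpc_listen fails and the app is expected to stop with a non-zero exit code. A sketch of the collision, assuming the default socket /var/tmp/spdk.sock and with a crude sleep standing in for the harness's waitforlisten:

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  "$SPDK_BIN" -m 0x1 &       # first instance binds /var/tmp/spdk.sock
  first=$!
  sleep 2                    # illustrative stand-in for waitforlisten

  # Second instance: hugepages are isolated per pid (note the distinct
  # --file-prefix=spdk_pid... in the EAL parameters above), but the RPC
  # socket collides, so rpc_listen fails and the app stops non-zero.
  "$SPDK_BIN" -m 0x2
  echo "second instance exited with rc=$?"   # non-zero is the expected outcome

  kill "$first" && wait "$first"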
00:04:49.834 [2024-12-06 18:16:44.391286] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:49.834 [2024-12-06 18:16:44.391293] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1895815 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1895815 ']' 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1895815 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1895815 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1895815' 00:04:49.834 killing process with pid 1895815 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1895815 00:04:49.834 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1895815 00:04:50.093 00:04:50.093 real 0m1.308s 00:04:50.093 user 0m1.535s 00:04:50.093 sys 0m0.372s 00:04:50.093 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.093 18:16:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.093 ************************************ 00:04:50.093 END TEST exit_on_failed_rpc_init 00:04:50.093 ************************************ 00:04:50.093 18:16:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.093 00:04:50.093 real 0m13.733s 00:04:50.093 user 0m13.273s 00:04:50.093 sys 0m1.612s 00:04:50.093 18:16:44 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.093 18:16:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.093 ************************************ 00:04:50.093 END TEST skip_rpc 00:04:50.093 ************************************ 00:04:50.093 18:16:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.093 18:16:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.093 18:16:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.093 18:16:44 -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.093 ************************************ 00:04:50.093 START TEST rpc_client 00:04:50.093 ************************************ 00:04:50.093 18:16:44 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.353 * Looking for test storage... 00:04:50.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:50.353 18:16:44 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.353 18:16:44 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.353 18:16:44 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.353 18:16:44 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.353 18:16:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.354 18:16:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:50.354 18:16:44 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.354 18:16:44 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.354 --rc genhtml_branch_coverage=1 00:04:50.354 --rc genhtml_function_coverage=1 00:04:50.354 --rc genhtml_legend=1 00:04:50.354 --rc geninfo_all_blocks=1 00:04:50.354 --rc geninfo_unexecuted_blocks=1 00:04:50.354 00:04:50.354 ' 00:04:50.354 18:16:44 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.354 --rc genhtml_branch_coverage=1 00:04:50.354 --rc genhtml_function_coverage=1 00:04:50.354 --rc genhtml_legend=1 00:04:50.354 --rc geninfo_all_blocks=1 00:04:50.354 --rc geninfo_unexecuted_blocks=1 00:04:50.354 00:04:50.354 ' 00:04:50.354 18:16:44 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.354 --rc genhtml_branch_coverage=1 00:04:50.354 --rc genhtml_function_coverage=1 00:04:50.354 --rc genhtml_legend=1 00:04:50.354 --rc geninfo_all_blocks=1 00:04:50.354 --rc geninfo_unexecuted_blocks=1 00:04:50.354 00:04:50.354 ' 00:04:50.354 18:16:44 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.354 --rc genhtml_branch_coverage=1 00:04:50.354 --rc genhtml_function_coverage=1 00:04:50.354 --rc genhtml_legend=1 00:04:50.354 --rc geninfo_all_blocks=1 00:04:50.354 --rc geninfo_unexecuted_blocks=1 00:04:50.354 00:04:50.354 ' 00:04:50.354 18:16:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:50.354 OK 00:04:50.354 18:16:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.354 00:04:50.354 real 0m0.226s 00:04:50.354 user 0m0.124s 00:04:50.354 sys 0m0.114s 00:04:50.354 18:16:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.354 18:16:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.354 ************************************ 00:04:50.354 END TEST rpc_client 00:04:50.354 ************************************ 00:04:50.354 18:16:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
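Both rpc_client above and json_config below begin by probing the installed lcov with the lt/cmp_versions helpers from scripts/common.sh: versions are split on '.', '-' and ':' and compared component-wise, so that 1.15 sorts below 2. A compact sketch of the same idea (not the exact implementation):

  version_lt() {               # returns 0 when $1 < $2
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for ((i = 0; i < n; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1                 # equal is not less-than
  }

  version_lt 1.15 2 && echo 'lcov < 2: use the legacy --rc branch/function flags'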
00:04:50.354 18:16:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.354 18:16:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.354 18:16:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.354 ************************************ 00:04:50.354 START TEST json_config 00:04:50.354 ************************************ 00:04:50.354 18:16:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.614 18:16:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.614 18:16:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.614 18:16:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.614 18:16:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.614 18:16:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.614 18:16:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.614 18:16:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.614 18:16:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.614 18:16:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.614 18:16:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.614 18:16:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.614 18:16:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:50.614 18:16:45 json_config -- scripts/common.sh@345 -- # : 1 00:04:50.614 18:16:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.614 18:16:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.614 18:16:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:50.614 18:16:45 json_config -- scripts/common.sh@353 -- # local d=1 00:04:50.614 18:16:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.614 18:16:45 json_config -- scripts/common.sh@355 -- # echo 1 00:04:50.614 18:16:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.614 18:16:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:50.614 18:16:45 json_config -- scripts/common.sh@353 -- # local d=2 00:04:50.614 18:16:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.614 18:16:45 json_config -- scripts/common.sh@355 -- # echo 2 00:04:50.614 18:16:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.614 18:16:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.614 18:16:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.614 18:16:45 json_config -- scripts/common.sh@368 -- # return 0 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.614 --rc genhtml_branch_coverage=1 00:04:50.614 --rc genhtml_function_coverage=1 00:04:50.614 --rc genhtml_legend=1 00:04:50.614 --rc geninfo_all_blocks=1 00:04:50.614 --rc geninfo_unexecuted_blocks=1 00:04:50.614 00:04:50.614 ' 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.614 --rc genhtml_branch_coverage=1 00:04:50.614 --rc genhtml_function_coverage=1 00:04:50.614 --rc genhtml_legend=1 00:04:50.614 --rc geninfo_all_blocks=1 00:04:50.614 --rc geninfo_unexecuted_blocks=1 00:04:50.614 00:04:50.614 ' 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.614 --rc genhtml_branch_coverage=1 00:04:50.614 --rc genhtml_function_coverage=1 00:04:50.614 --rc genhtml_legend=1 00:04:50.614 --rc geninfo_all_blocks=1 00:04:50.614 --rc geninfo_unexecuted_blocks=1 00:04:50.614 00:04:50.614 ' 00:04:50.614 18:16:45 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.614 --rc genhtml_branch_coverage=1 00:04:50.614 --rc genhtml_function_coverage=1 00:04:50.614 --rc genhtml_legend=1 00:04:50.614 --rc geninfo_all_blocks=1 00:04:50.614 --rc geninfo_unexecuted_blocks=1 00:04:50.614 00:04:50.614 ' 00:04:50.614 18:16:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:50.614 18:16:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.614 18:16:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.614 18:16:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.614 18:16:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.614 18:16:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.614 18:16:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.614 18:16:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.614 18:16:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.614 18:16:45 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.614 18:16:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@51 -- # : 0 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
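The host identity exported above comes from nvme-cli: NVME_HOSTNQN is the output of nvme gen-hostnqn, and NVME_HOSTID is the bare UUID inside it. A two-line sketch of the derivation (the parameter expansion is an illustrative equivalent of what nvmf/common.sh does, not a verbatim copy):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last ':' to get the UUID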
00:04:50.614 18:16:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.614 18:16:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.614 18:16:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:50.614 18:16:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:50.614 18:16:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:50.614 18:16:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:50.614 18:16:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:50.615 INFO: JSON configuration test init 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.615 18:16:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:50.615 18:16:45 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:50.615 18:16:45 json_config -- json_config/common.sh@10 -- # shift 00:04:50.615 18:16:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.615 18:16:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.615 18:16:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.615 18:16:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.615 18:16:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.615 18:16:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1896287 00:04:50.615 18:16:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.615 Waiting for target to run... 00:04:50.615 18:16:45 json_config -- json_config/common.sh@25 -- # waitforlisten 1896287 /var/tmp/spdk_tgt.sock 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 1896287 ']' 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.615 18:16:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:50.615 18:16:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.874 [2024-12-06 18:16:45.402701] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
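The launch below shows how the json_config harness starts its target: a private RPC socket (-r /var/tmp/spdk_tgt.sock), a 1024 MB hugepage reservation (-s 1024), and --wait-for-rpc so the framework pauses until a configuration is loaded. A sketch of that start-plus-wait, where the polling loop is an illustrative stand-in for the harness's waitforlisten helper:

  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$SPDK_BIN" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  app_pid=$!

  # Poll until the RPC socket answers; rpc_get_methods is served even in
  # the pre-start (--wait-for-rpc) state.
  until "$RPC" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo ''                      # the harness echoes an empty line once the target is up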
00:04:50.874 [2024-12-06 18:16:45.402778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896287 ] 00:04:51.133 [2024-12-06 18:16:45.700867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.133 [2024-12-06 18:16:45.725382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.702 18:16:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.702 18:16:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:51.702 18:16:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:51.702 00:04:51.702 18:16:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:51.702 18:16:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:51.702 18:16:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.702 18:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.702 18:16:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:51.702 18:16:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:51.702 18:16:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.702 18:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.702 18:16:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:51.702 18:16:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:51.702 18:16:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:52.273 18:16:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.273 18:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:52.273 18:16:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:52.273 18:16:46 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@54 -- # sort 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:52.273 18:16:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.273 18:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:52.273 18:16:46 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:52.273 18:16:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.273 18:16:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.273 18:16:47 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:52.273 18:16:47 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:52.273 18:16:47 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:52.273 18:16:47 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.273 18:16:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:52.534 MallocForNvmf0 00:04:52.534 18:16:47 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.534 18:16:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:52.794 MallocForNvmf1 00:04:52.794 18:16:47 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.794 18:16:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:52.794 [2024-12-06 18:16:47.530873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:52.794 18:16:47 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:52.794 18:16:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:53.055 18:16:47 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.055 18:16:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:53.315 18:16:47 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.315 18:16:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:53.315 18:16:48 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.575 18:16:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:53.575 [2024-12-06 18:16:48.249074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:53.575 18:16:48 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:53.575 18:16:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.575 18:16:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.575 18:16:48 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:53.575 18:16:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.575 18:16:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.575 18:16:48 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:53.575 18:16:48 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.575 18:16:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:53.836 MallocBdevForConfigChangeCheck 00:04:53.836 18:16:48 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:53.836 18:16:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.836 18:16:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.836 18:16:48 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:53.836 18:16:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.109 18:16:48 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:54.109 INFO: shutting down applications... 
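(Annotation: the create_nvmf_subsystem_config step traced above reduces to a short RPC sequence against the target's UNIX socket. A minimal sketch, assuming spdk_tgt is already running on /var/tmp/spdk_tgt.sock and $rootdir points at the SPDK checkout; commands and flags mirror the trace:)

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # two malloc bdevs to serve as namespaces (args: size_mb block_size)
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, -u/-c exactly as in the run above
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    # subsystem with both namespaces and a listener on 127.0.0.1:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420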
00:04:54.109 18:16:48 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:54.109 18:16:48 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:54.109 18:16:48 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:54.109 18:16:48 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:54.681 Calling clear_iscsi_subsystem 00:04:54.681 Calling clear_nvmf_subsystem 00:04:54.681 Calling clear_nbd_subsystem 00:04:54.681 Calling clear_ublk_subsystem 00:04:54.681 Calling clear_vhost_blk_subsystem 00:04:54.681 Calling clear_vhost_scsi_subsystem 00:04:54.681 Calling clear_bdev_subsystem 00:04:54.681 18:16:49 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:54.681 18:16:49 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:54.681 18:16:49 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:54.681 18:16:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:54.681 18:16:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:54.681 18:16:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:54.943 18:16:49 json_config -- json_config/json_config.sh@352 -- # break 00:04:54.943 18:16:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:54.943 18:16:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:54.943 18:16:49 json_config -- json_config/common.sh@31 -- # local app=target 00:04:54.943 18:16:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:54.943 18:16:49 json_config -- json_config/common.sh@35 -- # [[ -n 1896287 ]] 00:04:54.943 18:16:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1896287 00:04:54.943 18:16:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:54.943 18:16:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.943 18:16:49 json_config -- json_config/common.sh@41 -- # kill -0 1896287 00:04:54.943 18:16:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.514 18:16:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.514 18:16:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.514 18:16:50 json_config -- json_config/common.sh@41 -- # kill -0 1896287 00:04:55.514 18:16:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:55.514 18:16:50 json_config -- json_config/common.sh@43 -- # break 00:04:55.514 18:16:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:55.514 18:16:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:55.514 SPDK target shutdown done 00:04:55.514 18:16:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:55.514 INFO: relaunching applications... 
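(Annotation: json_config_test_shutdown_app above is the usual SIGINT-then-poll teardown: send SIGINT, probe the PID for up to 30 half-second intervals, then declare shutdown done. The same loop in isolation, assuming $pid holds the target's PID:)

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 only probes for process existence; no signal is delivered
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done
    echo 'SPDK target shutdown done'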
00:04:55.514 18:16:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.514 18:16:50 json_config -- json_config/common.sh@9 -- # local app=target 00:04:55.514 18:16:50 json_config -- json_config/common.sh@10 -- # shift 00:04:55.514 18:16:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.514 18:16:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.514 18:16:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.514 18:16:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.514 18:16:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.514 18:16:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1897424 00:04:55.514 18:16:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.514 Waiting for target to run... 00:04:55.514 18:16:50 json_config -- json_config/common.sh@25 -- # waitforlisten 1897424 /var/tmp/spdk_tgt.sock 00:04:55.514 18:16:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:55.515 18:16:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 1897424 ']' 00:04:55.515 18:16:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.515 18:16:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.515 18:16:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.515 18:16:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.515 18:16:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.515 [2024-12-06 18:16:50.243029] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:04:55.515 [2024-12-06 18:16:50.243085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897424 ] 00:04:56.086 [2024-12-06 18:16:50.565650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.086 [2024-12-06 18:16:50.592176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.346 [2024-12-06 18:16:51.093511] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.346 [2024-12-06 18:16:51.125885] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:56.606 18:16:51 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.606 18:16:51 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:56.606 18:16:51 json_config -- json_config/common.sh@26 -- # echo '' 00:04:56.606 00:04:56.606 18:16:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:56.606 18:16:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:56.606 INFO: Checking if target configuration is the same... 
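(Annotation: waitforlisten above blocks until the relaunched target answers RPCs on its UNIX-domain socket. A rough equivalent; wait_for_rpc_sock is a hypothetical helper name for this sketch, and rpc.py is assumed to be on PATH:)

    wait_for_rpc_sock() {
        local sock=$1 retries=${2:-100}
        while (( retries-- > 0 )); do
            # rpc_get_methods only succeeds once the app is up and listening
            rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk_tgt.sock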
00:04:56.606 18:16:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.606 18:16:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:56.606 18:16:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.606 + '[' 2 -ne 2 ']' 00:04:56.606 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:56.606 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:56.606 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:56.606 +++ basename /dev/fd/62 00:04:56.606 ++ mktemp /tmp/62.XXX 00:04:56.606 + tmp_file_1=/tmp/62.AQt 00:04:56.606 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.606 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:56.606 + tmp_file_2=/tmp/spdk_tgt_config.json.3cM 00:04:56.606 + ret=0 00:04:56.606 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.868 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:56.868 + diff -u /tmp/62.AQt /tmp/spdk_tgt_config.json.3cM 00:04:56.868 + echo 'INFO: JSON config files are the same' 00:04:56.868 INFO: JSON config files are the same 00:04:56.868 + rm /tmp/62.AQt /tmp/spdk_tgt_config.json.3cM 00:04:56.868 + exit 0 00:04:56.868 18:16:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:56.868 18:16:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:56.868 INFO: changing configuration and checking if this can be detected... 00:04:56.868 18:16:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:56.868 18:16:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:57.129 18:16:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:57.129 18:16:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.129 18:16:51 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.129 + '[' 2 -ne 2 ']' 00:04:57.129 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:57.129 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:57.129 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:57.129 +++ basename /dev/fd/62 00:04:57.129 ++ mktemp /tmp/62.XXX 00:04:57.129 + tmp_file_1=/tmp/62.jhF 00:04:57.129 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.129 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:57.129 + tmp_file_2=/tmp/spdk_tgt_config.json.qGy 00:04:57.129 + ret=0 00:04:57.129 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.390 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:57.390 + diff -u /tmp/62.jhF /tmp/spdk_tgt_config.json.qGy 00:04:57.390 + ret=1 00:04:57.390 + echo '=== Start of file: /tmp/62.jhF ===' 00:04:57.390 + cat /tmp/62.jhF 00:04:57.390 + echo '=== End of file: /tmp/62.jhF ===' 00:04:57.390 + echo '' 00:04:57.390 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qGy ===' 00:04:57.390 + cat /tmp/spdk_tgt_config.json.qGy 00:04:57.390 + echo '=== End of file: /tmp/spdk_tgt_config.json.qGy ===' 00:04:57.390 + echo '' 00:04:57.390 + rm /tmp/62.jhF /tmp/spdk_tgt_config.json.qGy 00:04:57.390 + exit 1 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:57.390 INFO: configuration change detected. 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 1897424 ]] 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.390 18:16:52 json_config -- json_config/json_config.sh@330 -- # killprocess 1897424 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@954 -- # '[' -z 1897424 ']' 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@958 -- # kill -0 1897424 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@959 -- # uname 00:04:57.390 18:16:52 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.390 18:16:52 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1897424 00:04:57.651 18:16:52 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.651 18:16:52 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.651 18:16:52 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1897424' 00:04:57.651 killing process with pid 1897424 00:04:57.651 18:16:52 json_config -- common/autotest_common.sh@973 -- # kill 1897424 00:04:57.651 18:16:52 json_config -- common/autotest_common.sh@978 -- # wait 1897424 00:04:57.912 18:16:52 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:57.912 18:16:52 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:57.912 18:16:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.912 18:16:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.912 18:16:52 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:57.912 18:16:52 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:57.912 INFO: Success 00:04:57.912 00:04:57.912 real 0m7.425s 00:04:57.912 user 0m8.957s 00:04:57.912 sys 0m1.984s 00:04:57.912 18:16:52 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.912 18:16:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.912 ************************************ 00:04:57.912 END TEST json_config 00:04:57.912 ************************************ 00:04:57.912 18:16:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:57.912 18:16:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.912 18:16:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.912 18:16:52 -- common/autotest_common.sh@10 -- # set +x 00:04:57.912 ************************************ 00:04:57.912 START TEST json_config_extra_key 00:04:57.912 ************************************ 00:04:57.912 18:16:52 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:57.912 18:16:52 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.912 18:16:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.912 18:16:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.173 18:16:52 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.173 18:16:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.173 18:16:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.174 18:16:52 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.174 18:16:52 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.174 18:16:52 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.174 --rc genhtml_branch_coverage=1 00:04:58.174 --rc genhtml_function_coverage=1 00:04:58.174 --rc genhtml_legend=1 00:04:58.174 --rc geninfo_all_blocks=1 00:04:58.174 --rc geninfo_unexecuted_blocks=1 00:04:58.174 00:04:58.174 ' 00:04:58.174 18:16:52 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.174 --rc genhtml_branch_coverage=1 00:04:58.174 --rc genhtml_function_coverage=1 00:04:58.174 --rc genhtml_legend=1 00:04:58.174 --rc geninfo_all_blocks=1 00:04:58.174 --rc geninfo_unexecuted_blocks=1 00:04:58.174 00:04:58.174 ' 00:04:58.174 18:16:52 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.174 --rc genhtml_branch_coverage=1 00:04:58.174 --rc genhtml_function_coverage=1 00:04:58.174 --rc genhtml_legend=1 00:04:58.174 --rc geninfo_all_blocks=1 00:04:58.174 --rc geninfo_unexecuted_blocks=1 00:04:58.174 00:04:58.174 ' 00:04:58.174 18:16:52 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.174 --rc genhtml_branch_coverage=1 00:04:58.174 --rc genhtml_function_coverage=1 00:04:58.174 --rc genhtml_legend=1 00:04:58.174 --rc geninfo_all_blocks=1 00:04:58.174 --rc geninfo_unexecuted_blocks=1 00:04:58.174 00:04:58.174 ' 00:04:58.174 18:16:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.174 18:16:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.174 18:16:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.174 18:16:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.174 18:16:52 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.174 18:16:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.174 18:16:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.174 18:16:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.175 18:16:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:58.175 INFO: launching applications... 
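(Annotation: the declare -A traces above show how json_config/common.sh keeps per-app state in bash associative arrays keyed by app name, so one set of helpers can drive a 'target' and, in other tests, an 'initiator'. A stripped-down sketch of the pattern; the spdk_tgt path is an assumption:)

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')

    app=target
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" &
    app_pid[$app]=$!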
00:04:58.175 18:16:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1898128 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.175 Waiting for target to run... 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1898128 /var/tmp/spdk_tgt.sock 00:04:58.175 18:16:52 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1898128 ']' 00:04:58.175 18:16:52 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.175 18:16:52 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:58.175 18:16:52 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.175 18:16:52 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.175 18:16:52 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.175 18:16:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.175 [2024-12-06 18:16:52.888905] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:04:58.175 [2024-12-06 18:16:52.888979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898128 ] 00:04:58.747 [2024-12-06 18:16:53.307272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.747 [2024-12-06 18:16:53.332586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.007 18:16:53 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.007 18:16:53 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:59.007 00:04:59.007 18:16:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:59.007 INFO: shutting down applications... 
00:04:59.007 18:16:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1898128 ]] 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1898128 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1898128 00:04:59.007 18:16:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:59.578 18:16:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:59.578 18:16:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.578 18:16:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1898128 00:04:59.578 18:16:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:59.578 18:16:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:59.578 18:16:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:59.578 18:16:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:59.578 SPDK target shutdown done 00:04:59.578 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:59.578 Success 00:04:59.578 00:04:59.578 real 0m1.584s 00:04:59.578 user 0m1.095s 00:04:59.578 sys 0m0.532s 00:04:59.578 18:16:54 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.578 18:16:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:59.578 ************************************ 00:04:59.578 END TEST json_config_extra_key 00:04:59.578 ************************************ 00:04:59.578 18:16:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.578 18:16:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.578 18:16:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.578 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:04:59.578 ************************************ 00:04:59.578 START TEST alias_rpc 00:04:59.578 ************************************ 00:04:59.578 18:16:54 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:59.839 * Looking for test storage... 
00:04:59.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.839 18:16:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:59.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.839 --rc genhtml_branch_coverage=1 00:04:59.839 --rc genhtml_function_coverage=1 00:04:59.839 --rc genhtml_legend=1 00:04:59.839 --rc geninfo_all_blocks=1 00:04:59.839 --rc geninfo_unexecuted_blocks=1 00:04:59.839 00:04:59.839 ' 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:59.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.839 --rc genhtml_branch_coverage=1 00:04:59.839 --rc genhtml_function_coverage=1 00:04:59.839 --rc genhtml_legend=1 00:04:59.839 --rc geninfo_all_blocks=1 00:04:59.839 --rc geninfo_unexecuted_blocks=1 00:04:59.839 00:04:59.839 ' 00:04:59.839 18:16:54 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:59.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.839 --rc genhtml_branch_coverage=1 00:04:59.839 --rc genhtml_function_coverage=1 00:04:59.839 --rc genhtml_legend=1 00:04:59.839 --rc geninfo_all_blocks=1 00:04:59.839 --rc geninfo_unexecuted_blocks=1 00:04:59.839 00:04:59.839 ' 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:59.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.839 --rc genhtml_branch_coverage=1 00:04:59.839 --rc genhtml_function_coverage=1 00:04:59.839 --rc genhtml_legend=1 00:04:59.839 --rc geninfo_all_blocks=1 00:04:59.839 --rc geninfo_unexecuted_blocks=1 00:04:59.839 00:04:59.839 ' 00:04:59.839 18:16:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:59.839 18:16:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1898488 00:04:59.839 18:16:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1898488 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1898488 ']' 00:04:59.839 18:16:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.839 18:16:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.839 [2024-12-06 18:16:54.544494] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:04:59.840 [2024-12-06 18:16:54.544572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898488 ] 00:05:00.100 [2024-12-06 18:16:54.630613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.100 [2024-12-06 18:16:54.665764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.691 18:16:55 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.691 18:16:55 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:00.691 18:16:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:00.950 18:16:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1898488 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1898488 ']' 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1898488 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1898488 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1898488' 00:05:00.950 killing process with pid 1898488 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@973 -- # kill 1898488 00:05:00.950 18:16:55 alias_rpc -- common/autotest_common.sh@978 -- # wait 1898488 00:05:01.210 00:05:01.210 real 0m1.486s 00:05:01.210 user 0m1.620s 00:05:01.210 sys 0m0.419s 00:05:01.210 18:16:55 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.210 18:16:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.210 ************************************ 00:05:01.210 END TEST alias_rpc 00:05:01.210 ************************************ 00:05:01.210 18:16:55 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:01.210 18:16:55 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.210 18:16:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.210 18:16:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.210 18:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:01.210 ************************************ 00:05:01.210 START TEST spdkcli_tcp 00:05:01.210 ************************************ 00:05:01.210 18:16:55 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:01.210 * Looking for test storage... 
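(Annotation: killprocess above never signals a PID blindly; it first reads the process's comm name with ps, so a recycled PID that now belongs to something like sudo is left alone. A simplified sketch of that guard — the real helper also branches on uname and handles the sudo case rather than bailing out:)

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1
        # comm= prints only the executable name, with no header line
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }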
00:05:01.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:01.210 18:16:55 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:01.210 18:16:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:01.210 18:16:55 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.472 18:16:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:01.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.472 --rc genhtml_branch_coverage=1 00:05:01.472 --rc genhtml_function_coverage=1 00:05:01.472 --rc genhtml_legend=1 00:05:01.472 --rc geninfo_all_blocks=1 00:05:01.472 --rc geninfo_unexecuted_blocks=1 00:05:01.472 00:05:01.472 ' 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:01.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.472 --rc genhtml_branch_coverage=1 00:05:01.472 --rc genhtml_function_coverage=1 00:05:01.472 --rc genhtml_legend=1 00:05:01.472 --rc geninfo_all_blocks=1 00:05:01.472 --rc 
geninfo_unexecuted_blocks=1 00:05:01.472 00:05:01.472 ' 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:01.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.472 --rc genhtml_branch_coverage=1 00:05:01.472 --rc genhtml_function_coverage=1 00:05:01.472 --rc genhtml_legend=1 00:05:01.472 --rc geninfo_all_blocks=1 00:05:01.472 --rc geninfo_unexecuted_blocks=1 00:05:01.472 00:05:01.472 ' 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:01.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.472 --rc genhtml_branch_coverage=1 00:05:01.472 --rc genhtml_function_coverage=1 00:05:01.472 --rc genhtml_legend=1 00:05:01.472 --rc geninfo_all_blocks=1 00:05:01.472 --rc geninfo_unexecuted_blocks=1 00:05:01.472 00:05:01.472 ' 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1898814 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1898814 00:05:01.472 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1898814 ']' 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.472 18:16:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.472 [2024-12-06 18:16:56.116289] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:05:01.472 [2024-12-06 18:16:56.116370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898814 ] 00:05:01.472 [2024-12-06 18:16:56.203481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.472 [2024-12-06 18:16:56.239347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.472 [2024-12-06 18:16:56.239348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.413 18:16:56 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.413 18:16:56 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:02.413 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1899021 00:05:02.413 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:02.413 18:16:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:02.413 [ 00:05:02.413 "bdev_malloc_delete", 00:05:02.413 "bdev_malloc_create", 00:05:02.413 "bdev_null_resize", 00:05:02.413 "bdev_null_delete", 00:05:02.413 "bdev_null_create", 00:05:02.413 "bdev_nvme_cuse_unregister", 00:05:02.413 "bdev_nvme_cuse_register", 00:05:02.413 "bdev_opal_new_user", 00:05:02.413 "bdev_opal_set_lock_state", 00:05:02.413 "bdev_opal_delete", 00:05:02.413 "bdev_opal_get_info", 00:05:02.413 "bdev_opal_create", 00:05:02.413 "bdev_nvme_opal_revert", 00:05:02.413 "bdev_nvme_opal_init", 00:05:02.413 "bdev_nvme_send_cmd", 00:05:02.413 "bdev_nvme_set_keys", 00:05:02.413 "bdev_nvme_get_path_iostat", 00:05:02.413 "bdev_nvme_get_mdns_discovery_info", 00:05:02.413 "bdev_nvme_stop_mdns_discovery", 00:05:02.413 "bdev_nvme_start_mdns_discovery", 00:05:02.413 "bdev_nvme_set_multipath_policy", 00:05:02.413 "bdev_nvme_set_preferred_path", 00:05:02.413 "bdev_nvme_get_io_paths", 00:05:02.413 "bdev_nvme_remove_error_injection", 00:05:02.413 "bdev_nvme_add_error_injection", 00:05:02.413 "bdev_nvme_get_discovery_info", 00:05:02.413 "bdev_nvme_stop_discovery", 00:05:02.413 "bdev_nvme_start_discovery", 00:05:02.413 "bdev_nvme_get_controller_health_info", 00:05:02.413 "bdev_nvme_disable_controller", 00:05:02.413 "bdev_nvme_enable_controller", 00:05:02.413 "bdev_nvme_reset_controller", 00:05:02.413 "bdev_nvme_get_transport_statistics", 00:05:02.413 "bdev_nvme_apply_firmware", 00:05:02.413 "bdev_nvme_detach_controller", 00:05:02.413 "bdev_nvme_get_controllers", 00:05:02.413 "bdev_nvme_attach_controller", 00:05:02.413 "bdev_nvme_set_hotplug", 00:05:02.413 "bdev_nvme_set_options", 00:05:02.413 "bdev_passthru_delete", 00:05:02.413 "bdev_passthru_create", 00:05:02.413 "bdev_lvol_set_parent_bdev", 00:05:02.413 "bdev_lvol_set_parent", 00:05:02.413 "bdev_lvol_check_shallow_copy", 00:05:02.413 "bdev_lvol_start_shallow_copy", 00:05:02.413 "bdev_lvol_grow_lvstore", 00:05:02.413 "bdev_lvol_get_lvols", 00:05:02.413 "bdev_lvol_get_lvstores", 00:05:02.413 "bdev_lvol_delete", 00:05:02.413 "bdev_lvol_set_read_only", 00:05:02.413 "bdev_lvol_resize", 00:05:02.413 "bdev_lvol_decouple_parent", 00:05:02.413 "bdev_lvol_inflate", 00:05:02.413 "bdev_lvol_rename", 00:05:02.413 "bdev_lvol_clone_bdev", 00:05:02.413 "bdev_lvol_clone", 00:05:02.413 "bdev_lvol_snapshot", 00:05:02.413 "bdev_lvol_create", 00:05:02.413 "bdev_lvol_delete_lvstore", 00:05:02.413 "bdev_lvol_rename_lvstore", 
00:05:02.413 "bdev_lvol_create_lvstore", 00:05:02.413 "bdev_raid_set_options", 00:05:02.413 "bdev_raid_remove_base_bdev", 00:05:02.413 "bdev_raid_add_base_bdev", 00:05:02.413 "bdev_raid_delete", 00:05:02.413 "bdev_raid_create", 00:05:02.413 "bdev_raid_get_bdevs", 00:05:02.413 "bdev_error_inject_error", 00:05:02.413 "bdev_error_delete", 00:05:02.413 "bdev_error_create", 00:05:02.413 "bdev_split_delete", 00:05:02.413 "bdev_split_create", 00:05:02.413 "bdev_delay_delete", 00:05:02.413 "bdev_delay_create", 00:05:02.413 "bdev_delay_update_latency", 00:05:02.413 "bdev_zone_block_delete", 00:05:02.413 "bdev_zone_block_create", 00:05:02.413 "blobfs_create", 00:05:02.413 "blobfs_detect", 00:05:02.413 "blobfs_set_cache_size", 00:05:02.413 "bdev_aio_delete", 00:05:02.413 "bdev_aio_rescan", 00:05:02.413 "bdev_aio_create", 00:05:02.413 "bdev_ftl_set_property", 00:05:02.413 "bdev_ftl_get_properties", 00:05:02.413 "bdev_ftl_get_stats", 00:05:02.413 "bdev_ftl_unmap", 00:05:02.413 "bdev_ftl_unload", 00:05:02.413 "bdev_ftl_delete", 00:05:02.413 "bdev_ftl_load", 00:05:02.413 "bdev_ftl_create", 00:05:02.413 "bdev_virtio_attach_controller", 00:05:02.413 "bdev_virtio_scsi_get_devices", 00:05:02.413 "bdev_virtio_detach_controller", 00:05:02.413 "bdev_virtio_blk_set_hotplug", 00:05:02.413 "bdev_iscsi_delete", 00:05:02.413 "bdev_iscsi_create", 00:05:02.413 "bdev_iscsi_set_options", 00:05:02.413 "accel_error_inject_error", 00:05:02.413 "ioat_scan_accel_module", 00:05:02.413 "dsa_scan_accel_module", 00:05:02.413 "iaa_scan_accel_module", 00:05:02.413 "vfu_virtio_create_fs_endpoint", 00:05:02.413 "vfu_virtio_create_scsi_endpoint", 00:05:02.413 "vfu_virtio_scsi_remove_target", 00:05:02.413 "vfu_virtio_scsi_add_target", 00:05:02.413 "vfu_virtio_create_blk_endpoint", 00:05:02.413 "vfu_virtio_delete_endpoint", 00:05:02.413 "keyring_file_remove_key", 00:05:02.413 "keyring_file_add_key", 00:05:02.413 "keyring_linux_set_options", 00:05:02.413 "fsdev_aio_delete", 00:05:02.413 "fsdev_aio_create", 00:05:02.414 "iscsi_get_histogram", 00:05:02.414 "iscsi_enable_histogram", 00:05:02.414 "iscsi_set_options", 00:05:02.414 "iscsi_get_auth_groups", 00:05:02.414 "iscsi_auth_group_remove_secret", 00:05:02.414 "iscsi_auth_group_add_secret", 00:05:02.414 "iscsi_delete_auth_group", 00:05:02.414 "iscsi_create_auth_group", 00:05:02.414 "iscsi_set_discovery_auth", 00:05:02.414 "iscsi_get_options", 00:05:02.414 "iscsi_target_node_request_logout", 00:05:02.414 "iscsi_target_node_set_redirect", 00:05:02.414 "iscsi_target_node_set_auth", 00:05:02.414 "iscsi_target_node_add_lun", 00:05:02.414 "iscsi_get_stats", 00:05:02.414 "iscsi_get_connections", 00:05:02.414 "iscsi_portal_group_set_auth", 00:05:02.414 "iscsi_start_portal_group", 00:05:02.414 "iscsi_delete_portal_group", 00:05:02.414 "iscsi_create_portal_group", 00:05:02.414 "iscsi_get_portal_groups", 00:05:02.414 "iscsi_delete_target_node", 00:05:02.414 "iscsi_target_node_remove_pg_ig_maps", 00:05:02.414 "iscsi_target_node_add_pg_ig_maps", 00:05:02.414 "iscsi_create_target_node", 00:05:02.414 "iscsi_get_target_nodes", 00:05:02.414 "iscsi_delete_initiator_group", 00:05:02.414 "iscsi_initiator_group_remove_initiators", 00:05:02.414 "iscsi_initiator_group_add_initiators", 00:05:02.414 "iscsi_create_initiator_group", 00:05:02.414 "iscsi_get_initiator_groups", 00:05:02.414 "nvmf_set_crdt", 00:05:02.414 "nvmf_set_config", 00:05:02.414 "nvmf_set_max_subsystems", 00:05:02.414 "nvmf_stop_mdns_prr", 00:05:02.414 "nvmf_publish_mdns_prr", 00:05:02.414 "nvmf_subsystem_get_listeners", 00:05:02.414 
"nvmf_subsystem_get_qpairs", 00:05:02.414 "nvmf_subsystem_get_controllers", 00:05:02.414 "nvmf_get_stats", 00:05:02.414 "nvmf_get_transports", 00:05:02.414 "nvmf_create_transport", 00:05:02.414 "nvmf_get_targets", 00:05:02.414 "nvmf_delete_target", 00:05:02.414 "nvmf_create_target", 00:05:02.414 "nvmf_subsystem_allow_any_host", 00:05:02.414 "nvmf_subsystem_set_keys", 00:05:02.414 "nvmf_subsystem_remove_host", 00:05:02.414 "nvmf_subsystem_add_host", 00:05:02.414 "nvmf_ns_remove_host", 00:05:02.414 "nvmf_ns_add_host", 00:05:02.414 "nvmf_subsystem_remove_ns", 00:05:02.414 "nvmf_subsystem_set_ns_ana_group", 00:05:02.414 "nvmf_subsystem_add_ns", 00:05:02.414 "nvmf_subsystem_listener_set_ana_state", 00:05:02.414 "nvmf_discovery_get_referrals", 00:05:02.414 "nvmf_discovery_remove_referral", 00:05:02.414 "nvmf_discovery_add_referral", 00:05:02.414 "nvmf_subsystem_remove_listener", 00:05:02.414 "nvmf_subsystem_add_listener", 00:05:02.414 "nvmf_delete_subsystem", 00:05:02.414 "nvmf_create_subsystem", 00:05:02.414 "nvmf_get_subsystems", 00:05:02.414 "env_dpdk_get_mem_stats", 00:05:02.414 "nbd_get_disks", 00:05:02.414 "nbd_stop_disk", 00:05:02.414 "nbd_start_disk", 00:05:02.414 "ublk_recover_disk", 00:05:02.414 "ublk_get_disks", 00:05:02.414 "ublk_stop_disk", 00:05:02.414 "ublk_start_disk", 00:05:02.414 "ublk_destroy_target", 00:05:02.414 "ublk_create_target", 00:05:02.414 "virtio_blk_create_transport", 00:05:02.414 "virtio_blk_get_transports", 00:05:02.414 "vhost_controller_set_coalescing", 00:05:02.414 "vhost_get_controllers", 00:05:02.414 "vhost_delete_controller", 00:05:02.414 "vhost_create_blk_controller", 00:05:02.414 "vhost_scsi_controller_remove_target", 00:05:02.414 "vhost_scsi_controller_add_target", 00:05:02.414 "vhost_start_scsi_controller", 00:05:02.414 "vhost_create_scsi_controller", 00:05:02.414 "thread_set_cpumask", 00:05:02.414 "scheduler_set_options", 00:05:02.414 "framework_get_governor", 00:05:02.414 "framework_get_scheduler", 00:05:02.414 "framework_set_scheduler", 00:05:02.414 "framework_get_reactors", 00:05:02.414 "thread_get_io_channels", 00:05:02.414 "thread_get_pollers", 00:05:02.414 "thread_get_stats", 00:05:02.414 "framework_monitor_context_switch", 00:05:02.414 "spdk_kill_instance", 00:05:02.414 "log_enable_timestamps", 00:05:02.414 "log_get_flags", 00:05:02.414 "log_clear_flag", 00:05:02.414 "log_set_flag", 00:05:02.414 "log_get_level", 00:05:02.414 "log_set_level", 00:05:02.414 "log_get_print_level", 00:05:02.414 "log_set_print_level", 00:05:02.414 "framework_enable_cpumask_locks", 00:05:02.414 "framework_disable_cpumask_locks", 00:05:02.414 "framework_wait_init", 00:05:02.414 "framework_start_init", 00:05:02.414 "scsi_get_devices", 00:05:02.414 "bdev_get_histogram", 00:05:02.414 "bdev_enable_histogram", 00:05:02.414 "bdev_set_qos_limit", 00:05:02.414 "bdev_set_qd_sampling_period", 00:05:02.414 "bdev_get_bdevs", 00:05:02.414 "bdev_reset_iostat", 00:05:02.414 "bdev_get_iostat", 00:05:02.414 "bdev_examine", 00:05:02.414 "bdev_wait_for_examine", 00:05:02.414 "bdev_set_options", 00:05:02.414 "accel_get_stats", 00:05:02.414 "accel_set_options", 00:05:02.414 "accel_set_driver", 00:05:02.414 "accel_crypto_key_destroy", 00:05:02.414 "accel_crypto_keys_get", 00:05:02.414 "accel_crypto_key_create", 00:05:02.414 "accel_assign_opc", 00:05:02.414 "accel_get_module_info", 00:05:02.414 "accel_get_opc_assignments", 00:05:02.414 "vmd_rescan", 00:05:02.414 "vmd_remove_device", 00:05:02.414 "vmd_enable", 00:05:02.414 "sock_get_default_impl", 00:05:02.414 "sock_set_default_impl", 
00:05:02.414 "sock_impl_set_options", 00:05:02.414 "sock_impl_get_options", 00:05:02.414 "iobuf_get_stats", 00:05:02.414 "iobuf_set_options", 00:05:02.414 "keyring_get_keys", 00:05:02.414 "vfu_tgt_set_base_path", 00:05:02.414 "framework_get_pci_devices", 00:05:02.414 "framework_get_config", 00:05:02.414 "framework_get_subsystems", 00:05:02.414 "fsdev_set_opts", 00:05:02.414 "fsdev_get_opts", 00:05:02.414 "trace_get_info", 00:05:02.414 "trace_get_tpoint_group_mask", 00:05:02.414 "trace_disable_tpoint_group", 00:05:02.414 "trace_enable_tpoint_group", 00:05:02.414 "trace_clear_tpoint_mask", 00:05:02.414 "trace_set_tpoint_mask", 00:05:02.414 "notify_get_notifications", 00:05:02.414 "notify_get_types", 00:05:02.414 "spdk_get_version", 00:05:02.414 "rpc_get_methods" 00:05:02.414 ] 00:05:02.414 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.414 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:02.414 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1898814 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1898814 ']' 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1898814 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1898814 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1898814' 00:05:02.414 killing process with pid 1898814 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1898814 00:05:02.414 18:16:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1898814 00:05:02.674 00:05:02.674 real 0m1.514s 00:05:02.674 user 0m2.741s 00:05:02.674 sys 0m0.461s 00:05:02.674 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.674 18:16:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:02.674 ************************************ 00:05:02.674 END TEST spdkcli_tcp 00:05:02.674 ************************************ 00:05:02.674 18:16:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.674 18:16:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.674 18:16:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.674 18:16:57 -- common/autotest_common.sh@10 -- # set +x 00:05:02.674 ************************************ 00:05:02.674 START TEST dpdk_mem_utility 00:05:02.674 ************************************ 00:05:02.674 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:02.934 * Looking for test storage... 
00:05:02.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:02.934 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.934 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.934 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.934 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.934 18:16:57 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:02.934 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.934 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.934 --rc genhtml_branch_coverage=1 00:05:02.935 --rc genhtml_function_coverage=1 00:05:02.935 --rc genhtml_legend=1 00:05:02.935 --rc geninfo_all_blocks=1 00:05:02.935 --rc geninfo_unexecuted_blocks=1 00:05:02.935 00:05:02.935 ' 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.935 --rc 
genhtml_branch_coverage=1 00:05:02.935 --rc genhtml_function_coverage=1 00:05:02.935 --rc genhtml_legend=1 00:05:02.935 --rc geninfo_all_blocks=1 00:05:02.935 --rc geninfo_unexecuted_blocks=1 00:05:02.935 00:05:02.935 ' 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.935 --rc genhtml_branch_coverage=1 00:05:02.935 --rc genhtml_function_coverage=1 00:05:02.935 --rc genhtml_legend=1 00:05:02.935 --rc geninfo_all_blocks=1 00:05:02.935 --rc geninfo_unexecuted_blocks=1 00:05:02.935 00:05:02.935 ' 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.935 --rc genhtml_branch_coverage=1 00:05:02.935 --rc genhtml_function_coverage=1 00:05:02.935 --rc genhtml_legend=1 00:05:02.935 --rc geninfo_all_blocks=1 00:05:02.935 --rc geninfo_unexecuted_blocks=1 00:05:02.935 00:05:02.935 ' 00:05:02.935 18:16:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:02.935 18:16:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1899156 00:05:02.935 18:16:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1899156 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1899156 ']' 00:05:02.935 18:16:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.935 18:16:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.935 [2024-12-06 18:16:57.702870] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:05:02.935 [2024-12-06 18:16:57.702946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899156 ] 00:05:03.195 [2024-12-06 18:16:57.791837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.195 [2024-12-06 18:16:57.833391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.763 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.763 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:03.763 18:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:03.763 18:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:03.763 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.763 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.763 { 00:05:03.763 "filename": "/tmp/spdk_mem_dump.txt" 00:05:03.763 } 00:05:03.763 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.763 18:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:03.763 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:03.763 1 heaps totaling size 818.000000 MiB 00:05:03.763 size: 818.000000 MiB heap id: 0 00:05:03.763 end heaps---------- 00:05:03.763 9 mempools totaling size 603.782043 MiB 00:05:03.763 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:03.763 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:03.763 size: 100.555481 MiB name: bdev_io_1899156 00:05:03.763 size: 50.003479 MiB name: msgpool_1899156 00:05:03.763 size: 36.509338 MiB name: fsdev_io_1899156 00:05:03.763 size: 21.763794 MiB name: PDU_Pool 00:05:03.763 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:03.763 size: 4.133484 MiB name: evtpool_1899156 00:05:03.763 size: 0.026123 MiB name: Session_Pool 00:05:03.763 end mempools------- 00:05:03.763 6 memzones totaling size 4.142822 MiB 00:05:03.763 size: 1.000366 MiB name: RG_ring_0_1899156 00:05:03.763 size: 1.000366 MiB name: RG_ring_1_1899156 00:05:03.763 size: 1.000366 MiB name: RG_ring_4_1899156 00:05:03.763 size: 1.000366 MiB name: RG_ring_5_1899156 00:05:03.763 size: 0.125366 MiB name: RG_ring_2_1899156 00:05:03.763 size: 0.015991 MiB name: RG_ring_3_1899156 00:05:03.763 end memzones------- 00:05:03.763 18:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.023 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:04.023 list of free elements. 
size: 10.852478 MiB 00:05:04.023 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:04.023 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:04.023 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:04.023 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:04.023 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:04.023 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:04.023 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:04.023 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:04.023 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:04.023 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:04.023 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:04.023 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:04.023 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:04.023 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:04.023 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:04.023 list of standard malloc elements. size: 199.218628 MiB 00:05:04.023 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:04.023 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:04.023 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:04.023 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:04.023 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:04.023 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:04.023 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:04.023 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:04.023 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:04.023 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:04.023 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:04.023 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:04.023 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:04.023 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:04.023 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:04.023 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:04.023 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:04.023 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:04.023 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:04.023 list of memzone associated elements. size: 607.928894 MiB 00:05:04.023 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:04.023 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.023 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:04.023 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.023 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:04.023 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1899156_0 00:05:04.023 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:04.023 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1899156_0 00:05:04.023 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:04.023 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1899156_0 00:05:04.023 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:04.023 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.023 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:04.023 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.023 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:04.023 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1899156_0 00:05:04.023 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:04.023 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1899156 00:05:04.023 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:04.023 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1899156 00:05:04.023 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:04.023 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.023 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:04.023 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.023 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:04.023 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.023 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:04.023 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.023 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:04.023 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1899156 00:05:04.023 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:04.023 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1899156 00:05:04.023 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:04.023 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1899156 00:05:04.023 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:04.023 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1899156 00:05:04.023 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:04.023 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1899156 00:05:04.023 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:04.023 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1899156 00:05:04.023 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:04.023 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.023 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:04.023 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.023 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:04.023 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.023 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:04.023 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1899156 00:05:04.023 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:04.023 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1899156 00:05:04.023 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:04.023 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.023 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:04.023 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.023 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:04.023 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1899156 00:05:04.023 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:04.023 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.023 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:04.023 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1899156 00:05:04.023 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:04.024 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1899156 00:05:04.024 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:04.024 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1899156 00:05:04.024 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:04.024 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.024 18:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.024 18:16:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1899156 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1899156 ']' 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1899156 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1899156 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1899156' 00:05:04.024 killing process with pid 1899156 00:05:04.024 18:16:58 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1899156 00:05:04.024 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1899156 00:05:04.283 00:05:04.283 real 0m1.401s 00:05:04.283 user 0m1.445s 00:05:04.283 sys 0m0.438s 00:05:04.283 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.283 18:16:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.283 ************************************ 00:05:04.283 END TEST dpdk_mem_utility 00:05:04.283 ************************************ 00:05:04.283 18:16:58 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.283 18:16:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.283 18:16:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.283 18:16:58 -- common/autotest_common.sh@10 -- # set +x 00:05:04.283 ************************************ 00:05:04.283 START TEST event 00:05:04.283 ************************************ 00:05:04.283 18:16:58 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:04.283 * Looking for test storage... 00:05:04.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:04.283 18:16:59 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.283 18:16:59 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.283 18:16:59 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.543 18:16:59 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.543 18:16:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.543 18:16:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.543 18:16:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.543 18:16:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.543 18:16:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.543 18:16:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.543 18:16:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.543 18:16:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.543 18:16:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.543 18:16:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.543 18:16:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.543 18:16:59 event -- scripts/common.sh@344 -- # case "$op" in 00:05:04.543 18:16:59 event -- scripts/common.sh@345 -- # : 1 00:05:04.543 18:16:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.543 18:16:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.543 18:16:59 event -- scripts/common.sh@365 -- # decimal 1 00:05:04.543 18:16:59 event -- scripts/common.sh@353 -- # local d=1 00:05:04.543 18:16:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.543 18:16:59 event -- scripts/common.sh@355 -- # echo 1 00:05:04.543 18:16:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.543 18:16:59 event -- scripts/common.sh@366 -- # decimal 2 00:05:04.543 18:16:59 event -- scripts/common.sh@353 -- # local d=2 00:05:04.543 18:16:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.543 18:16:59 event -- scripts/common.sh@355 -- # echo 2 00:05:04.543 18:16:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.543 18:16:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.543 18:16:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.543 18:16:59 event -- scripts/common.sh@368 -- # return 0 00:05:04.543 18:16:59 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.543 18:16:59 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.543 --rc genhtml_branch_coverage=1 00:05:04.543 --rc genhtml_function_coverage=1 00:05:04.543 --rc genhtml_legend=1 00:05:04.543 --rc geninfo_all_blocks=1 00:05:04.543 --rc geninfo_unexecuted_blocks=1 00:05:04.543 00:05:04.543 ' 00:05:04.543 18:16:59 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.543 --rc genhtml_branch_coverage=1 00:05:04.543 --rc genhtml_function_coverage=1 00:05:04.543 --rc genhtml_legend=1 00:05:04.543 --rc geninfo_all_blocks=1 00:05:04.543 --rc geninfo_unexecuted_blocks=1 00:05:04.543 00:05:04.543 ' 00:05:04.543 18:16:59 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.543 --rc genhtml_branch_coverage=1 00:05:04.543 --rc genhtml_function_coverage=1 00:05:04.543 --rc genhtml_legend=1 00:05:04.543 --rc geninfo_all_blocks=1 00:05:04.543 --rc geninfo_unexecuted_blocks=1 00:05:04.543 00:05:04.543 ' 00:05:04.543 18:16:59 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.543 --rc genhtml_branch_coverage=1 00:05:04.543 --rc genhtml_function_coverage=1 00:05:04.543 --rc genhtml_legend=1 00:05:04.543 --rc geninfo_all_blocks=1 00:05:04.543 --rc geninfo_unexecuted_blocks=1 00:05:04.543 00:05:04.543 ' 00:05:04.543 18:16:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:04.543 18:16:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:04.543 18:16:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.543 18:16:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:04.543 18:16:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.543 18:16:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.543 ************************************ 00:05:04.543 START TEST event_perf 00:05:04.543 ************************************ 00:05:04.543 18:16:59 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:04.543 Running I/O for 1 seconds...[2024-12-06 18:16:59.163923] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:04.543 [2024-12-06 18:16:59.164025] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899503 ] 00:05:04.543 [2024-12-06 18:16:59.250623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.543 [2024-12-06 18:16:59.286242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.543 [2024-12-06 18:16:59.286397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.543 [2024-12-06 18:16:59.286548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.543 Running I/O for 1 seconds...[2024-12-06 18:16:59.286549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.534 00:05:05.534 lcore 0: 175834 00:05:05.534 lcore 1: 175837 00:05:05.534 lcore 2: 175838 00:05:05.534 lcore 3: 175838 00:05:05.534 done. 00:05:05.534 00:05:05.534 real 0m1.173s 00:05:05.534 user 0m4.090s 00:05:05.534 sys 0m0.079s 00:05:05.534 18:17:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.534 18:17:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.534 ************************************ 00:05:05.534 END TEST event_perf 00:05:05.534 ************************************ 00:05:05.794 18:17:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.794 18:17:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.794 18:17:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.794 18:17:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.794 ************************************ 00:05:05.794 START TEST event_reactor 00:05:05.794 ************************************ 00:05:05.794 18:17:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:05.794 [2024-12-06 18:17:00.416313] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:05:05.794 [2024-12-06 18:17:00.416416] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899857 ] 00:05:05.794 [2024-12-06 18:17:00.504716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.794 [2024-12-06 18:17:00.541009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.796 test_start 00:05:06.796 oneshot 00:05:06.796 tick 100 00:05:06.796 tick 100 00:05:06.796 tick 250 00:05:06.796 tick 100 00:05:06.796 tick 100 00:05:06.796 tick 100 00:05:06.796 tick 250 00:05:06.796 tick 500 00:05:06.796 tick 100 00:05:06.796 tick 100 00:05:06.796 tick 250 00:05:06.796 tick 100 00:05:06.796 tick 100 00:05:06.796 test_end 00:05:06.796 00:05:06.796 real 0m1.174s 00:05:06.796 user 0m1.090s 00:05:06.796 sys 0m0.080s 00:05:06.796 18:17:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.796 18:17:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.796 ************************************ 00:05:06.796 END TEST event_reactor 00:05:06.796 ************************************ 00:05:07.067 18:17:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.067 18:17:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:07.067 18:17:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.067 18:17:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.067 ************************************ 00:05:07.067 START TEST event_reactor_perf 00:05:07.067 ************************************ 00:05:07.067 18:17:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.067 [2024-12-06 18:17:01.667324] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:05:07.068 [2024-12-06 18:17:01.667411] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900213 ] 00:05:07.068 [2024-12-06 18:17:01.756804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.068 [2024-12-06 18:17:01.786145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.450 test_start 00:05:08.450 test_end 00:05:08.450 Performance: 541570 events per second 00:05:08.450 00:05:08.450 real 0m1.167s 00:05:08.450 user 0m1.088s 00:05:08.450 sys 0m0.076s 00:05:08.450 18:17:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.450 18:17:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.450 ************************************ 00:05:08.450 END TEST event_reactor_perf 00:05:08.450 ************************************ 00:05:08.450 18:17:02 event -- event/event.sh@49 -- # uname -s 00:05:08.450 18:17:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.450 18:17:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.450 18:17:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.450 18:17:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.450 18:17:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.450 ************************************ 00:05:08.450 START TEST event_scheduler 00:05:08.450 ************************************ 00:05:08.450 18:17:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:08.450 * Looking for test storage... 
00:05:08.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:08.450 18:17:02 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.450 18:17:02 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.450 18:17:02 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.450 18:17:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.450 --rc genhtml_branch_coverage=1 00:05:08.450 --rc genhtml_function_coverage=1 00:05:08.450 --rc genhtml_legend=1 00:05:08.450 --rc geninfo_all_blocks=1 00:05:08.450 --rc geninfo_unexecuted_blocks=1 00:05:08.450 00:05:08.450 ' 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.450 --rc genhtml_branch_coverage=1 00:05:08.450 --rc genhtml_function_coverage=1 00:05:08.450 --rc genhtml_legend=1 00:05:08.450 --rc geninfo_all_blocks=1 00:05:08.450 --rc geninfo_unexecuted_blocks=1 00:05:08.450 00:05:08.450 ' 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.450 --rc genhtml_branch_coverage=1 00:05:08.450 --rc genhtml_function_coverage=1 00:05:08.450 --rc genhtml_legend=1 00:05:08.450 --rc geninfo_all_blocks=1 00:05:08.450 --rc geninfo_unexecuted_blocks=1 00:05:08.450 00:05:08.450 ' 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.450 --rc genhtml_branch_coverage=1 00:05:08.450 --rc genhtml_function_coverage=1 00:05:08.450 --rc genhtml_legend=1 00:05:08.450 --rc geninfo_all_blocks=1 00:05:08.450 --rc geninfo_unexecuted_blocks=1 00:05:08.450 00:05:08.450 ' 00:05:08.450 18:17:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.450 18:17:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1900532 00:05:08.450 18:17:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.450 18:17:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1900532 00:05:08.450 18:17:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1900532 ']' 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.450 18:17:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.450 [2024-12-06 18:17:03.147919] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:08.450 [2024-12-06 18:17:03.147999] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900532 ] 00:05:08.710 [2024-12-06 18:17:03.239114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.710 [2024-12-06 18:17:03.294267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.710 [2024-12-06 18:17:03.294429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.710 [2024-12-06 18:17:03.294589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.710 [2024-12-06 18:17:03.294589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.280 18:17:03 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.280 18:17:03 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:09.280 18:17:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.280 18:17:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.280 18:17:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.280 [2024-12-06 18:17:03.973050] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:09.280 [2024-12-06 18:17:03.973070] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.280 [2024-12-06 18:17:03.973080] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.280 [2024-12-06 18:17:03.973086] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.280 [2024-12-06 18:17:03.973091] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.280 18:17:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.280 18:17:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.280 18:17:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.280 18:17:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.280 [2024-12-06 18:17:04.036978] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
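The xtrace records that follow create and tune threads through rpc.py's --plugin loader rather than a built-in RPC method. A rough standalone equivalent of the first traced call, assuming the plugin being loaded is scheduler_plugin.py under test/event/scheduler (so that directory must be on PYTHONPATH, as the test harness presumably arranges) and that the scheduler app is listening on the default /var/tmp/spdk.sock:

  export PYTHONPATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler   # assumed plugin location
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100   # thread name, cpumask, percent of time active, per the trace below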
00:05:09.280 18:17:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.280 18:17:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:09.280 18:17:04 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.280 18:17:04 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.280 18:17:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 ************************************ 00:05:09.541 START TEST scheduler_create_thread 00:05:09.541 ************************************ 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 2 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 3 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 4 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 5 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 6 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 7 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 8 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.541 9 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.541 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.113 10 00:05:10.113 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.113 18:17:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:10.113 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.113 18:17:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.497 18:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.497 18:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:11.497 18:17:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:11.497 18:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.497 18:17:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.068 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.068 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.068 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.068 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.009 18:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.009 18:17:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.009 18:17:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.009 18:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.009 18:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.579 18:17:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.579 00:05:13.579 real 0m4.224s 00:05:13.579 user 0m0.025s 00:05:13.579 sys 0m0.007s 00:05:13.579 18:17:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.579 18:17:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.579 ************************************ 00:05:13.579 END TEST scheduler_create_thread 00:05:13.580 ************************************ 00:05:13.580 18:17:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:13.580 18:17:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1900532 00:05:13.580 18:17:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1900532 ']' 00:05:13.580 18:17:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1900532 00:05:13.580 18:17:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:13.580 18:17:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.580 18:17:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1900532 00:05:13.840 18:17:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:13.840 18:17:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:13.840 18:17:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1900532' 00:05:13.840 killing process with pid 1900532 00:05:13.840 18:17:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1900532 00:05:13.840 18:17:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1900532 00:05:13.840 [2024-12-06 18:17:08.582675] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
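For context on the scheduler run that just stopped: the set_opts NOTICE lines at startup (load limit 20, core limit 80, core busy 95) are the values the dynamic scheduler logged when it was selected. Selecting and inspecting that scheduler over JSON-RPC is a two-call sketch, assuming the default RPC socket; both methods appear in the rpc_get_methods listing earlier in this log:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic   # switch from the default static scheduler
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_scheduler           # report the active scheduler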
00:05:14.101 00:05:14.101 real 0m5.844s 00:05:14.101 user 0m12.927s 00:05:14.101 sys 0m0.434s 00:05:14.101 18:17:08 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.101 18:17:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.101 ************************************ 00:05:14.101 END TEST event_scheduler 00:05:14.101 ************************************ 00:05:14.101 18:17:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:14.101 18:17:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:14.101 18:17:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.101 18:17:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.101 18:17:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.101 ************************************ 00:05:14.101 START TEST app_repeat 00:05:14.101 ************************************ 00:05:14.101 18:17:08 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1901667 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1901667' 00:05:14.101 Process app_repeat pid: 1901667 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:14.101 spdk_app_start Round 0 00:05:14.101 18:17:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1901667 /var/tmp/spdk-nbd.sock 00:05:14.101 18:17:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1901667 ']' 00:05:14.102 18:17:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.102 18:17:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.102 18:17:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.102 18:17:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.102 18:17:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.102 [2024-12-06 18:17:08.865345] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:05:14.102 [2024-12-06 18:17:08.865409] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901667 ] 00:05:14.362 [2024-12-06 18:17:08.951624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.362 [2024-12-06 18:17:08.985712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.362 [2024-12-06 18:17:08.985856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.362 18:17:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.362 18:17:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.362 18:17:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.622 Malloc0 00:05:14.622 18:17:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.882 Malloc1 00:05:14.882 18:17:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.882 /dev/nbd0 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.882 1+0 records in 00:05:14.882 1+0 records out 00:05:14.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215722 s, 19.0 MB/s 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:14.882 18:17:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.882 18:17:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.143 /dev/nbd1 00:05:15.143 18:17:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.143 18:17:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.143 1+0 records in 00:05:15.143 1+0 records out 00:05:15.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271786 s, 15.1 MB/s 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.143 18:17:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.143 18:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.143 18:17:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.143 
18:17:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.143 18:17:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.143 18:17:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:15.405 { 00:05:15.405 "nbd_device": "/dev/nbd0", 00:05:15.405 "bdev_name": "Malloc0" 00:05:15.405 }, 00:05:15.405 { 00:05:15.405 "nbd_device": "/dev/nbd1", 00:05:15.405 "bdev_name": "Malloc1" 00:05:15.405 } 00:05:15.405 ]' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.405 { 00:05:15.405 "nbd_device": "/dev/nbd0", 00:05:15.405 "bdev_name": "Malloc0" 00:05:15.405 }, 00:05:15.405 { 00:05:15.405 "nbd_device": "/dev/nbd1", 00:05:15.405 "bdev_name": "Malloc1" 00:05:15.405 } 00:05:15.405 ]' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.405 /dev/nbd1' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.405 /dev/nbd1' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.405 256+0 records in 00:05:15.405 256+0 records out 00:05:15.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127347 s, 82.3 MB/s 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.405 256+0 records in 00:05:15.405 256+0 records out 00:05:15.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012511 s, 83.8 MB/s 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.405 256+0 records in 00:05:15.405 256+0 records out 00:05:15.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134945 s, 77.7 MB/s 00:05:15.405 18:17:10 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.405 18:17:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.666 18:17:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.926 18:17:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.187 18:17:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.187 18:17:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.448 18:17:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.448 [2024-12-06 18:17:11.089000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.448 [2024-12-06 18:17:11.118435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.448 [2024-12-06 18:17:11.118435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.448 [2024-12-06 18:17:11.147575] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.448 [2024-12-06 18:17:11.147610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.744 18:17:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.744 18:17:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:19.744 spdk_app_start Round 1 00:05:19.744 18:17:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1901667 /var/tmp/spdk-nbd.sock 00:05:19.745 18:17:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1901667 ']' 00:05:19.745 18:17:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.745 18:17:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.745 18:17:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:19.745 18:17:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.745 18:17:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.745 18:17:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.745 18:17:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.745 18:17:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.745 Malloc0 00:05:19.745 18:17:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.005 Malloc1 00:05:20.005 18:17:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.005 18:17:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.006 18:17:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.006 18:17:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.006 18:17:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.006 18:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.006 18:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.006 18:17:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.006 /dev/nbd0 00:05:20.006 18:17:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.006 18:17:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:20.006 1+0 records in 00:05:20.006 1+0 records out 00:05:20.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273184 s, 15.0 MB/s 00:05:20.006 18:17:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.267 18:17:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.267 18:17:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.267 18:17:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.267 18:17:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.267 18:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.267 18:17:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.267 18:17:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.267 /dev/nbd1 00:05:20.267 18:17:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.267 18:17:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.267 18:17:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.267 18:17:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.267 18:17:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.267 18:17:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.267 18:17:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.268 1+0 records in 00:05:20.268 1+0 records out 00:05:20.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311892 s, 13.1 MB/s 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.268 18:17:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.268 18:17:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.268 18:17:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.268 18:17:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.268 18:17:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.268 18:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:20.527 { 00:05:20.527 "nbd_device": "/dev/nbd0", 00:05:20.527 "bdev_name": "Malloc0" 00:05:20.527 }, 00:05:20.527 { 00:05:20.527 "nbd_device": "/dev/nbd1", 00:05:20.527 "bdev_name": "Malloc1" 00:05:20.527 } 00:05:20.527 ]' 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.527 { 00:05:20.527 "nbd_device": "/dev/nbd0", 00:05:20.527 "bdev_name": "Malloc0" 00:05:20.527 }, 00:05:20.527 { 00:05:20.527 "nbd_device": "/dev/nbd1", 00:05:20.527 "bdev_name": "Malloc1" 00:05:20.527 } 00:05:20.527 ]' 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.527 /dev/nbd1' 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.527 /dev/nbd1' 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.527 256+0 records in 00:05:20.527 256+0 records out 00:05:20.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125825 s, 83.3 MB/s 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.527 256+0 records in 00:05:20.527 256+0 records out 00:05:20.527 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121133 s, 86.6 MB/s 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.527 18:17:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.806 256+0 records in 00:05:20.806 256+0 records out 00:05:20.806 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132267 s, 79.3 MB/s 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.806 18:17:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.066 18:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.326 18:17:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.326 18:17:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.587 18:17:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.587 [2024-12-06 18:17:16.241555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.587 [2024-12-06 18:17:16.270685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.587 [2024-12-06 18:17:16.270686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.587 [2024-12-06 18:17:16.300286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.587 [2024-12-06 18:17:16.300318] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.888 18:17:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.888 18:17:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:24.888 spdk_app_start Round 2 00:05:24.888 18:17:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1901667 /var/tmp/spdk-nbd.sock 00:05:24.888 18:17:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1901667 ']' 00:05:24.888 18:17:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.888 18:17:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.888 18:17:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:24.888 18:17:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.888 18:17:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.888 18:17:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.888 18:17:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:24.888 18:17:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.888 Malloc0 00:05:24.888 18:17:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.149 Malloc1 00:05:25.149 18:17:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.149 18:17:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.149 /dev/nbd0 00:05:25.409 18:17:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.409 18:17:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.409 18:17:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:25.409 18:17:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.409 18:17:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.409 18:17:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:25.410 1+0 records in 00:05:25.410 1+0 records out 00:05:25.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266824 s, 15.4 MB/s 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.410 18:17:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.410 18:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.410 18:17:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.410 18:17:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.410 /dev/nbd1 00:05:25.410 18:17:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.671 18:17:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.671 1+0 records in 00:05:25.671 1+0 records out 00:05:25.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272809 s, 15.0 MB/s 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.671 18:17:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.671 18:17:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.671 18:17:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.671 18:17:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.671 18:17:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.671 18:17:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.671 18:17:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:25.671 { 00:05:25.671 "nbd_device": "/dev/nbd0", 00:05:25.671 "bdev_name": "Malloc0" 00:05:25.671 }, 00:05:25.671 { 00:05:25.671 "nbd_device": "/dev/nbd1", 00:05:25.671 "bdev_name": "Malloc1" 00:05:25.671 } 00:05:25.671 ]' 00:05:25.671 18:17:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.671 { 00:05:25.671 "nbd_device": "/dev/nbd0", 00:05:25.671 "bdev_name": "Malloc0" 00:05:25.671 }, 00:05:25.672 { 00:05:25.672 "nbd_device": "/dev/nbd1", 00:05:25.672 "bdev_name": "Malloc1" 00:05:25.672 } 00:05:25.672 ]' 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.672 /dev/nbd1' 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:25.672 /dev/nbd1' 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.672 18:17:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.934 256+0 records in 00:05:25.934 256+0 records out 00:05:25.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127092 s, 82.5 MB/s 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.934 256+0 records in 00:05:25.934 256+0 records out 00:05:25.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122338 s, 85.7 MB/s 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.934 256+0 records in 00:05:25.934 256+0 records out 00:05:25.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129274 s, 81.1 MB/s 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.934 18:17:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.196 18:17:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.457 18:17:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.457 18:17:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.716 18:17:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.716 [2024-12-06 18:17:21.409904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.716 [2024-12-06 18:17:21.440225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.716 [2024-12-06 18:17:21.440225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.717 [2024-12-06 18:17:21.469340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.717 [2024-12-06 18:17:21.469370] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.033 18:17:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1901667 /var/tmp/spdk-nbd.sock 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1901667 ']' 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.033 18:17:24 event.app_repeat -- event/event.sh@39 -- # killprocess 1901667 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1901667 ']' 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1901667 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1901667 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1901667' 00:05:30.033 killing process with pid 1901667 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1901667 00:05:30.033 18:17:24 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1901667 00:05:30.034 spdk_app_start is called in Round 0. 00:05:30.034 Shutdown signal received, stop current app iteration 00:05:30.034 Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 reinitialization... 00:05:30.034 spdk_app_start is called in Round 1. 00:05:30.034 Shutdown signal received, stop current app iteration 00:05:30.034 Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 reinitialization... 00:05:30.034 spdk_app_start is called in Round 2. 00:05:30.034 Shutdown signal received, stop current app iteration 00:05:30.034 Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 reinitialization... 00:05:30.034 spdk_app_start is called in Round 3. 
00:05:30.034 Shutdown signal received, stop current app iteration 00:05:30.034 18:17:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:30.034 18:17:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:30.034 00:05:30.034 real 0m15.846s 00:05:30.034 user 0m34.764s 00:05:30.034 sys 0m2.305s 00:05:30.034 18:17:24 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.034 18:17:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.034 ************************************ 00:05:30.034 END TEST app_repeat 00:05:30.034 ************************************ 00:05:30.034 18:17:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:30.034 18:17:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:30.034 18:17:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.034 18:17:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.034 18:17:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.034 ************************************ 00:05:30.034 START TEST cpu_locks 00:05:30.034 ************************************ 00:05:30.034 18:17:24 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:30.295 * Looking for test storage... 00:05:30.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:30.295 18:17:24 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:30.295 18:17:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:30.295 18:17:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:30.295 18:17:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.295 18:17:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:30.295 18:17:24 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.295 18:17:24 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:30.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.295 --rc genhtml_branch_coverage=1 00:05:30.295 --rc genhtml_function_coverage=1 00:05:30.295 --rc genhtml_legend=1 00:05:30.295 --rc geninfo_all_blocks=1 00:05:30.295 --rc geninfo_unexecuted_blocks=1 00:05:30.295 00:05:30.295 ' 00:05:30.295 18:17:24 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:30.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.295 --rc genhtml_branch_coverage=1 00:05:30.295 --rc genhtml_function_coverage=1 00:05:30.295 --rc genhtml_legend=1 00:05:30.295 --rc geninfo_all_blocks=1 00:05:30.295 --rc geninfo_unexecuted_blocks=1 00:05:30.295 00:05:30.295 ' 00:05:30.295 18:17:24 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:30.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.295 --rc genhtml_branch_coverage=1 00:05:30.295 --rc genhtml_function_coverage=1 00:05:30.296 --rc genhtml_legend=1 00:05:30.296 --rc geninfo_all_blocks=1 00:05:30.296 --rc geninfo_unexecuted_blocks=1 00:05:30.296 00:05:30.296 ' 00:05:30.296 18:17:24 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:30.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.296 --rc genhtml_branch_coverage=1 00:05:30.296 --rc genhtml_function_coverage=1 00:05:30.296 --rc genhtml_legend=1 00:05:30.296 --rc geninfo_all_blocks=1 00:05:30.296 --rc geninfo_unexecuted_blocks=1 00:05:30.296 00:05:30.296 ' 00:05:30.296 18:17:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:30.296 18:17:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:30.296 18:17:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:30.296 18:17:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:30.296 18:17:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.296 18:17:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.296 18:17:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.296 ************************************ 
00:05:30.296 START TEST default_locks 00:05:30.296 ************************************ 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1905076 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1905076 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1905076 ']' 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.296 18:17:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.296 [2024-12-06 18:17:25.053597] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:30.296 [2024-12-06 18:17:25.053658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905076 ] 00:05:30.557 [2024-12-06 18:17:25.140758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.557 [2024-12-06 18:17:25.179056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.127 18:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.127 18:17:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:31.127 18:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1905076 00:05:31.127 18:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1905076 00:05:31.127 18:17:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.698 lslocks: write error 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1905076 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1905076 ']' 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1905076 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1905076 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1905076' 00:05:31.698 killing process with pid 1905076 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1905076 00:05:31.698 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1905076 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1905076 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1905076 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1905076 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1905076 ']' 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1905076) - No such process 00:05:31.959 ERROR: process (pid: 1905076) is no longer running 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:31.959 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.960 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.960 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.960 18:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:31.960 18:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.960 18:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.960 18:17:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.960 00:05:31.960 real 0m1.638s 00:05:31.960 user 0m1.787s 00:05:31.960 sys 0m0.560s 00:05:31.960 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.960 18:17:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.960 ************************************ 00:05:31.960 END TEST default_locks 00:05:31.960 ************************************ 00:05:31.960 18:17:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:31.960 18:17:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.960 18:17:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.960 18:17:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.960 ************************************ 00:05:31.960 START TEST default_locks_via_rpc 00:05:31.960 ************************************ 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1905443 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1905443 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1905443 ']' 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
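For reference, the core of TEST default_locks above is the locks_exist helper: spdk_tgt started with -m 0x1 claims core 0 by taking a lock on /var/tmp/spdk_cpu_lock_000, and the test confirms it with lslocks. The stray "lslocks: write error" lines are most likely benign: grep -q exits on its first match, so lslocks sees its output pipe close early. A minimal sketch of the check, assuming an SPDK checkout and no other spdk_tgt running:

    build/bin/spdk_tgt -m 0x1 &        # claims core 0, locks /var/tmp/spdk_cpu_lock_000
    pid=$!
    sleep 1                            # simplified; the real harness uses waitforlisten
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"
    kill "$pid"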
00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.960 18:17:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.220 [2024-12-06 18:17:26.757495] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:32.220 [2024-12-06 18:17:26.757555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905443 ] 00:05:32.220 [2024-12-06 18:17:26.845649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.220 [2024-12-06 18:17:26.886195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.791 18:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.791 18:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.792 18:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.052 18:17:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.052 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1905443 00:05:33.052 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1905443 00:05:33.052 18:17:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.312 18:17:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1905443 00:05:33.312 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1905443 ']' 00:05:33.312 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1905443 00:05:33.312 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.312 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.312 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1905443 00:05:33.573 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.574 
18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.574 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1905443' 00:05:33.574 killing process with pid 1905443 00:05:33.574 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1905443 00:05:33.574 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1905443 00:05:33.574 00:05:33.574 real 0m1.595s 00:05:33.574 user 0m1.718s 00:05:33.574 sys 0m0.568s 00:05:33.574 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.574 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.574 ************************************ 00:05:33.574 END TEST default_locks_via_rpc 00:05:33.574 ************************************ 00:05:33.574 18:17:28 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:33.574 18:17:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.574 18:17:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.574 18:17:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.838 ************************************ 00:05:33.838 START TEST non_locking_app_on_locked_coremask 00:05:33.838 ************************************ 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1905793 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1905793 /var/tmp/spdk.sock 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1905793 ']' 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.838 18:17:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.838 [2024-12-06 18:17:28.429422] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
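TEST default_locks_via_rpc, which ends above, drives the same lock files through runtime RPCs instead of process startup: framework_disable_cpumask_locks releases the per-core locks of a live target, and framework_enable_cpumask_locks takes them back. A sketch of that round trip (RPC names as in the trace; sleep again stands in for waitforlisten):

    build/bin/spdk_tgt -m 0x1 &
    pid=$!; sleep 1
    scripts/rpc.py framework_disable_cpumask_locks   # releases /var/tmp/spdk_cpu_lock_*
    lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core locks held"
    scripts/rpc.py framework_enable_cpumask_locks    # reacquires them
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks held again"
    kill "$pid"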
00:05:33.838 [2024-12-06 18:17:28.429478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905793 ] 00:05:33.838 [2024-12-06 18:17:28.513900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.838 [2024-12-06 18:17:28.546068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.778 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.778 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.778 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1906001 00:05:34.778 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1906001 /var/tmp/spdk2.sock 00:05:34.778 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:34.779 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1906001 ']' 00:05:34.779 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.779 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.779 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.779 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.779 18:17:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.779 [2024-12-06 18:17:29.271994] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:34.779 [2024-12-06 18:17:29.272048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906001 ] 00:05:34.779 [2024-12-06 18:17:29.359241] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.779 [2024-12-06 18:17:29.359261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.779 [2024-12-06 18:17:29.417545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.351 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.351 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.351 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1905793 00:05:35.351 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1905793 00:05:35.351 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.293 lslocks: write error 00:05:36.293 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1905793 00:05:36.293 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1905793 ']' 00:05:36.293 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1905793 00:05:36.293 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.293 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.293 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1905793 00:05:36.553 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.553 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.553 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1905793' 00:05:36.553 killing process with pid 1905793 00:05:36.553 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1905793 00:05:36.553 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1905793 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1906001 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1906001 ']' 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1906001 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906001 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906001' 00:05:36.811 
killing process with pid 1906001 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1906001 00:05:36.811 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1906001 00:05:37.072 00:05:37.072 real 0m3.326s 00:05:37.072 user 0m3.674s 00:05:37.072 sys 0m1.061s 00:05:37.072 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.072 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.072 ************************************ 00:05:37.072 END TEST non_locking_app_on_locked_coremask 00:05:37.072 ************************************ 00:05:37.072 18:17:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:37.072 18:17:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.072 18:17:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.072 18:17:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.072 ************************************ 00:05:37.072 START TEST locking_app_on_unlocked_coremask 00:05:37.072 ************************************ 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1906511 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1906511 /var/tmp/spdk.sock 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1906511 ']' 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.072 18:17:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.072 [2024-12-06 18:17:31.835729] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:37.072 [2024-12-06 18:17:31.835795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906511 ] 00:05:37.332 [2024-12-06 18:17:31.922927] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
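TEST non_locking_app_on_locked_coremask, completed above, covers the opt-out path: a second target started with --disable-cpumask-locks may share a core that another instance has already locked. Sketched with the same masks and sockets as the trace:

    build/bin/spdk_tgt -m 0x1 &                                         # locks core 0
    sleep 1
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # both targets run: the second never touches /var/tmp/spdk_cpu_lock_000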
00:05:37.332 [2024-12-06 18:17:31.922956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.332 [2024-12-06 18:17:31.957698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1906712 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1906712 /var/tmp/spdk2.sock 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1906712 ']' 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.901 18:17:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.901 [2024-12-06 18:17:32.657505] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:05:37.901 [2024-12-06 18:17:32.657556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906712 ] 00:05:38.162 [2024-12-06 18:17:32.741988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.162 [2024-12-06 18:17:32.800115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.735 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.735 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:38.735 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1906712 00:05:38.735 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1906712 00:05:38.735 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.677 lslocks: write error 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1906511 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1906511 ']' 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1906511 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906511 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906511' 00:05:39.677 killing process with pid 1906511 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1906511 00:05:39.677 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1906511 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1906712 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1906712 ']' 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1906712 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906712 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.938 18:17:34 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906712' 00:05:39.938 killing process with pid 1906712 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1906712 00:05:39.938 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1906712 00:05:40.201 00:05:40.201 real 0m3.011s 00:05:40.201 user 0m3.339s 00:05:40.201 sys 0m0.934s 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.201 ************************************ 00:05:40.201 END TEST locking_app_on_unlocked_coremask 00:05:40.201 ************************************ 00:05:40.201 18:17:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:40.201 18:17:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.201 18:17:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.201 18:17:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.201 ************************************ 00:05:40.201 START TEST locking_app_on_locked_coremask 00:05:40.201 ************************************ 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1907094 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1907094 /var/tmp/spdk.sock 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1907094 ']' 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.201 18:17:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.201 [2024-12-06 18:17:34.929888] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
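TEST locking_app_on_unlocked_coremask, which ends above, is the mirror image: when the first instance opts out with --disable-cpumask-locks, a later instance on the same core is free to take the lock, and locks_exist is then run against the second pid. A sketch under the same assumptions as before (the pgrep pattern is illustrative, matching on the second instance's RPC socket path):

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &        # holds no core lock
    sleep 1
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # free to lock core 0
    sleep 1
    lslocks -p "$(pgrep -f spdk2.sock)" | grep -q spdk_cpu_lock && echo "second app owns the lock"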
00:05:40.201 [2024-12-06 18:17:34.929951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907094 ] 00:05:40.462 [2024-12-06 18:17:35.016767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.462 [2024-12-06 18:17:35.051720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1907418 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1907418 /var/tmp/spdk2.sock 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1907418 /var/tmp/spdk2.sock 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1907418 /var/tmp/spdk2.sock 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1907418 ']' 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.083 18:17:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.083 [2024-12-06 18:17:35.746468] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:05:41.083 [2024-12-06 18:17:35.746520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907418 ] 00:05:41.083 [2024-12-06 18:17:35.831170] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1907094 has claimed it. 00:05:41.083 [2024-12-06 18:17:35.831200] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1907418) - No such process 00:05:41.694 ERROR: process (pid: 1907418) is no longer running 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1907094 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1907094 00:05:41.694 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.264 lslocks: write error 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1907094 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1907094 ']' 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1907094 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1907094 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1907094' 00:05:42.265 killing process with pid 1907094 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1907094 00:05:42.265 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1907094 00:05:42.265 00:05:42.265 real 0m2.155s 00:05:42.265 user 0m2.413s 00:05:42.265 sys 0m0.599s 00:05:42.265 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:42.265 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.265 ************************************ 00:05:42.265 END TEST locking_app_on_locked_coremask 00:05:42.265 ************************************ 00:05:42.525 18:17:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:42.525 18:17:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.525 18:17:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.525 18:17:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.525 ************************************ 00:05:42.525 START TEST locking_overlapped_coremask 00:05:42.525 ************************************ 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1907718 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1907718 /var/tmp/spdk.sock 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1907718 ']' 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.525 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.525 [2024-12-06 18:17:37.154512] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
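TEST locking_app_on_locked_coremask, completed above, is the enforcement case: the second instance keeps locking enabled and must refuse to start, logging "Cannot create lock on core 0, probably process <pid> has claimed it" and exiting nonzero, which the NOT wrapper turns into a pass. Sketched:

    build/bin/spdk_tgt -m 0x1 &                          # first instance claims core 0
    sleep 1
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # expected to fail on the core lock
    echo "second instance exited with status $?"         # nonzero by design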
00:05:42.525 [2024-12-06 18:17:37.154567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907718 ] 00:05:42.525 [2024-12-06 18:17:37.241140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.525 [2024-12-06 18:17:37.284044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.525 [2024-12-06 18:17:37.284197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.525 [2024-12-06 18:17:37.284198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1907803 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1907803 /var/tmp/spdk2.sock 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1907803 /var/tmp/spdk2.sock 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1907803 /var/tmp/spdk2.sock 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1907803 ']' 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.467 18:17:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.467 [2024-12-06 18:17:38.014492] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:05:43.467 [2024-12-06 18:17:38.014546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907803 ] 00:05:43.467 [2024-12-06 18:17:38.136717] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1907718 has claimed it. 00:05:43.467 [2024-12-06 18:17:38.136766] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:44.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1907803) - No such process 00:05:44.041 ERROR: process (pid: 1907803) is no longer running 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1907718 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1907718 ']' 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1907718 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1907718 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1907718' 00:05:44.041 killing process with pid 1907718 00:05:44.041 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1907718 00:05:44.041 18:17:38 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1907718 00:05:44.302 00:05:44.302 real 0m1.787s 00:05:44.302 user 0m5.164s 00:05:44.302 sys 0m0.388s 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.302 ************************************ 00:05:44.302 END TEST locking_overlapped_coremask 00:05:44.302 ************************************ 00:05:44.302 18:17:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:44.302 18:17:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.302 18:17:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.302 18:17:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.302 ************************************ 00:05:44.302 START TEST locking_overlapped_coremask_via_rpc 00:05:44.302 ************************************ 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1908161 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1908161 /var/tmp/spdk.sock 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1908161 ']' 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.302 18:17:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.302 [2024-12-06 18:17:39.018302] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:44.302 [2024-12-06 18:17:39.018355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908161 ] 00:05:44.564 [2024-12-06 18:17:39.099395] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.564 [2024-12-06 18:17:39.099418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.564 [2024-12-06 18:17:39.132679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.564 [2024-12-06 18:17:39.132735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.564 [2024-12-06 18:17:39.132737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.136 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1908183 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1908183 /var/tmp/spdk2.sock 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1908183 ']' 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.137 18:17:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.137 [2024-12-06 18:17:39.856020] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:45.137 [2024-12-06 18:17:39.856071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908183 ] 00:05:45.398 [2024-12-06 18:17:39.968477] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:45.398 [2024-12-06 18:17:39.968508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.398 [2024-12-06 18:17:40.049404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.398 [2024-12-06 18:17:40.052763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.398 [2024-12-06 18:17:40.052765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.971 [2024-12-06 18:17:40.662725] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1908161 has claimed it. 
00:05:45.971 request: 00:05:45.971 { 00:05:45.971 "method": "framework_enable_cpumask_locks", 00:05:45.971 "req_id": 1 00:05:45.971 } 00:05:45.971 Got JSON-RPC error response 00:05:45.971 response: 00:05:45.971 { 00:05:45.971 "code": -32603, 00:05:45.971 "message": "Failed to claim CPU core: 2" 00:05:45.971 } 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1908161 /var/tmp/spdk.sock 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1908161 ']' 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.971 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.233 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.233 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.234 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1908183 /var/tmp/spdk2.sock 00:05:46.234 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1908183 ']' 00:05:46.234 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.234 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.234 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
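The request/response pair above is the same core-2 conflict surfacing at the RPC level: with --disable-cpumask-locks both targets start cleanly, and the locks are only taken when framework_enable_cpumask_locks is called. A sketch of the two calls the test drives, with socket paths taken from the traces (rpc.py path abbreviated):

    # First target (mask 0x7) claims its cores successfully:
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # Second target (mask 0x1c) then fails with -32603 ("Failed to claim
    # CPU core: 2"), since the first target already holds core 2:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks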
00:05:46.234 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.234 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.495 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.495 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.495 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.495 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.495 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.495 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.495 00:05:46.495 real 0m2.075s 00:05:46.495 user 0m0.859s 00:05:46.495 sys 0m0.134s 00:05:46.495 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.495 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.495 ************************************ 00:05:46.495 END TEST locking_overlapped_coremask_via_rpc 00:05:46.495 ************************************ 00:05:46.495 18:17:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.495 18:17:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1908161 ]] 00:05:46.495 18:17:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1908161 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1908161 ']' 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1908161 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908161 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1908161' 00:05:46.495 killing process with pid 1908161 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1908161 00:05:46.495 18:17:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1908161 00:05:46.755 18:17:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1908183 ]] 00:05:46.756 18:17:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1908183 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1908183 ']' 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1908183 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908183 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1908183' 00:05:46.756 killing process with pid 1908183 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1908183 00:05:46.756 18:17:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1908183 00:05:47.016 18:17:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.016 18:17:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:47.016 18:17:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1908161 ]] 00:05:47.016 18:17:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1908161 00:05:47.016 18:17:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1908161 ']' 00:05:47.016 18:17:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1908161 00:05:47.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1908161) - No such process 00:05:47.016 18:17:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1908161 is not found' 00:05:47.016 Process with pid 1908161 is not found 00:05:47.016 18:17:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1908183 ]] 00:05:47.016 18:17:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1908183 00:05:47.016 18:17:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1908183 ']' 00:05:47.016 18:17:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1908183 00:05:47.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1908183) - No such process 00:05:47.016 18:17:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1908183 is not found' 00:05:47.016 Process with pid 1908183 is not found 00:05:47.016 18:17:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.016 00:05:47.016 real 0m16.834s 00:05:47.016 user 0m28.922s 00:05:47.016 sys 0m5.171s 00:05:47.016 18:17:41 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.016 18:17:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.016 ************************************ 00:05:47.016 END TEST cpu_locks 00:05:47.016 ************************************ 00:05:47.016 00:05:47.016 real 0m42.715s 00:05:47.016 user 1m23.168s 00:05:47.016 sys 0m8.572s 00:05:47.016 18:17:41 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.016 18:17:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.016 ************************************ 00:05:47.016 END TEST event 00:05:47.016 ************************************ 00:05:47.016 18:17:41 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:47.016 18:17:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.016 18:17:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.016 18:17:41 -- common/autotest_common.sh@10 -- # set +x 00:05:47.016 ************************************ 00:05:47.016 START TEST thread 00:05:47.016 ************************************ 00:05:47.017 18:17:41 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:47.017 * Looking for test storage... 00:05:47.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:47.017 18:17:41 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:47.017 18:17:41 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:47.017 18:17:41 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:47.276 18:17:41 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:47.276 18:17:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.276 18:17:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.276 18:17:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.276 18:17:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.276 18:17:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.276 18:17:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.276 18:17:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.276 18:17:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.276 18:17:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.276 18:17:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.276 18:17:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.276 18:17:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:47.276 18:17:41 thread -- scripts/common.sh@345 -- # : 1 00:05:47.277 18:17:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.277 18:17:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.277 18:17:41 thread -- scripts/common.sh@365 -- # decimal 1 00:05:47.277 18:17:41 thread -- scripts/common.sh@353 -- # local d=1 00:05:47.277 18:17:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.277 18:17:41 thread -- scripts/common.sh@355 -- # echo 1 00:05:47.277 18:17:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.277 18:17:41 thread -- scripts/common.sh@366 -- # decimal 2 00:05:47.277 18:17:41 thread -- scripts/common.sh@353 -- # local d=2 00:05:47.277 18:17:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.277 18:17:41 thread -- scripts/common.sh@355 -- # echo 2 00:05:47.277 18:17:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.277 18:17:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.277 18:17:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.277 18:17:41 thread -- scripts/common.sh@368 -- # return 0 00:05:47.277 18:17:41 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.277 18:17:41 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.277 --rc genhtml_branch_coverage=1 00:05:47.277 --rc genhtml_function_coverage=1 00:05:47.277 --rc genhtml_legend=1 00:05:47.277 --rc geninfo_all_blocks=1 00:05:47.277 --rc geninfo_unexecuted_blocks=1 00:05:47.277 00:05:47.277 ' 00:05:47.277 18:17:41 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.277 --rc genhtml_branch_coverage=1 00:05:47.277 --rc genhtml_function_coverage=1 00:05:47.277 --rc genhtml_legend=1 00:05:47.277 --rc geninfo_all_blocks=1 00:05:47.277 --rc geninfo_unexecuted_blocks=1 00:05:47.277 
00:05:47.277 ' 00:05:47.277 18:17:41 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.277 --rc genhtml_branch_coverage=1 00:05:47.277 --rc genhtml_function_coverage=1 00:05:47.277 --rc genhtml_legend=1 00:05:47.277 --rc geninfo_all_blocks=1 00:05:47.277 --rc geninfo_unexecuted_blocks=1 00:05:47.277 00:05:47.277 ' 00:05:47.277 18:17:41 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:47.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.277 --rc genhtml_branch_coverage=1 00:05:47.277 --rc genhtml_function_coverage=1 00:05:47.277 --rc genhtml_legend=1 00:05:47.277 --rc geninfo_all_blocks=1 00:05:47.277 --rc geninfo_unexecuted_blocks=1 00:05:47.277 00:05:47.277 ' 00:05:47.277 18:17:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.277 18:17:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:47.277 18:17:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.277 18:17:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.277 ************************************ 00:05:47.277 START TEST thread_poller_perf 00:05:47.277 ************************************ 00:05:47.277 18:17:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.277 [2024-12-06 18:17:41.948207] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:47.277 [2024-12-06 18:17:41.948294] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908729 ] 00:05:47.277 [2024-12-06 18:17:42.036557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.536 [2024-12-06 18:17:42.069540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.536 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:48.476 [2024-12-06T17:17:43.260Z] ====================================== 00:05:48.476 [2024-12-06T17:17:43.260Z] busy:2406321358 (cyc) 00:05:48.476 [2024-12-06T17:17:43.260Z] total_run_count: 419000 00:05:48.476 [2024-12-06T17:17:43.260Z] tsc_hz: 2400000000 (cyc) 00:05:48.476 [2024-12-06T17:17:43.260Z] ====================================== 00:05:48.476 [2024-12-06T17:17:43.260Z] poller_cost: 5743 (cyc), 2392 (nsec) 00:05:48.476 00:05:48.476 real 0m1.176s 00:05:48.476 user 0m1.093s 00:05:48.476 sys 0m0.079s 00:05:48.476 18:17:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.476 18:17:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 ************************************ 00:05:48.476 END TEST thread_poller_perf 00:05:48.476 ************************************ 00:05:48.476 18:17:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.476 18:17:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:48.476 18:17:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.476 18:17:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.476 ************************************ 00:05:48.476 START TEST thread_poller_perf 00:05:48.476 ************************************ 00:05:48.476 18:17:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.476 [2024-12-06 18:17:43.201582] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:05:48.476 [2024-12-06 18:17:43.201721] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908978 ] 00:05:48.736 [2024-12-06 18:17:43.297303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.736 [2024-12-06 18:17:43.329205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.736 Running 1000 pollers for 1 seconds with 0 microseconds period. 
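The poller_cost line in each report is simply the busy cycle count divided by the run count, converted to nanoseconds through the reported TSC frequency. Redoing the first report's numbers in bash integer math:

    busy=2406321358                         # busy (cyc)
    runs=419000                             # total_run_count
    tsc_hz=2400000000                       # tsc_hz (cyc)
    cyc=$(( busy / runs ))                  # 5743 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))   # 2392 nsec per poll
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"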
00:05:49.799 [2024-12-06T17:17:44.583Z] ====================================== 00:05:49.799 [2024-12-06T17:17:44.583Z] busy:2401438660 (cyc) 00:05:49.799 [2024-12-06T17:17:44.583Z] total_run_count: 5096000 00:05:49.799 [2024-12-06T17:17:44.583Z] tsc_hz: 2400000000 (cyc) 00:05:49.799 [2024-12-06T17:17:44.583Z] ====================================== 00:05:49.799 [2024-12-06T17:17:44.583Z] poller_cost: 471 (cyc), 196 (nsec) 00:05:49.799 00:05:49.800 real 0m1.178s 00:05:49.800 user 0m1.097s 00:05:49.800 sys 0m0.076s 00:05:49.800 18:17:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.800 18:17:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.800 ************************************ 00:05:49.800 END TEST thread_poller_perf 00:05:49.800 ************************************ 00:05:49.800 18:17:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:49.800 00:05:49.800 real 0m2.701s 00:05:49.800 user 0m2.367s 00:05:49.800 sys 0m0.349s 00:05:49.800 18:17:44 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.800 18:17:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.800 ************************************ 00:05:49.800 END TEST thread 00:05:49.800 ************************************ 00:05:49.800 18:17:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:49.800 18:17:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.800 18:17:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.800 18:17:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.800 18:17:44 -- common/autotest_common.sh@10 -- # set +x 00:05:49.800 ************************************ 00:05:49.800 START TEST app_cmdline 00:05:49.800 ************************************ 00:05:49.800 18:17:44 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:49.800 * Looking for test storage... 
00:05:49.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:49.800 18:17:44 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:49.800 18:17:44 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:49.800 18:17:44 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.060 18:17:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.060 --rc genhtml_branch_coverage=1 00:05:50.060 --rc genhtml_function_coverage=1 00:05:50.060 --rc genhtml_legend=1 00:05:50.060 --rc geninfo_all_blocks=1 00:05:50.060 --rc geninfo_unexecuted_blocks=1 00:05:50.060 00:05:50.060 ' 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.060 --rc genhtml_branch_coverage=1 00:05:50.060 --rc genhtml_function_coverage=1 00:05:50.060 --rc genhtml_legend=1 00:05:50.060 --rc geninfo_all_blocks=1 00:05:50.060 --rc geninfo_unexecuted_blocks=1 
00:05:50.060 00:05:50.060 ' 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.060 --rc genhtml_branch_coverage=1 00:05:50.060 --rc genhtml_function_coverage=1 00:05:50.060 --rc genhtml_legend=1 00:05:50.060 --rc geninfo_all_blocks=1 00:05:50.060 --rc geninfo_unexecuted_blocks=1 00:05:50.060 00:05:50.060 ' 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.060 --rc genhtml_branch_coverage=1 00:05:50.060 --rc genhtml_function_coverage=1 00:05:50.060 --rc genhtml_legend=1 00:05:50.060 --rc geninfo_all_blocks=1 00:05:50.060 --rc geninfo_unexecuted_blocks=1 00:05:50.060 00:05:50.060 ' 00:05:50.060 18:17:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:50.060 18:17:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1909384 00:05:50.060 18:17:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1909384 00:05:50.060 18:17:44 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1909384 ']' 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.060 18:17:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.060 [2024-12-06 18:17:44.730150] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
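The scripts/common.sh stretch that every test file re-runs above is a small version gate: "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares them numerically, component by component, to decide whether the installed lcov predates 2.x. Condensed to the same idiom the trace uses:

    IFS=.-: read -ra ver1 <<< "1.15"    # splits on '.', '-', ':' -> (1 15)
    IFS=.-: read -ra ver2 <<< "2"       # -> (2)
    # The first differing component decides: 1 < 2, so "lt 1.15 2" is true
    # and the lcov 1.x --rc lcov_*_coverage option set is exported.
    (( ver1[0] < ver2[0] )) && echo "lcov 1.x option set selected"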
00:05:50.060 [2024-12-06 18:17:44.730228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1909384 ] 00:05:50.060 [2024-12-06 18:17:44.817436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.320 [2024-12-06 18:17:44.853225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.889 18:17:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.889 18:17:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:50.889 18:17:45 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:50.889 { 00:05:50.889 "version": "SPDK v25.01-pre git sha1 c2471e450", 00:05:50.889 "fields": { 00:05:50.889 "major": 25, 00:05:50.889 "minor": 1, 00:05:50.889 "patch": 0, 00:05:50.889 "suffix": "-pre", 00:05:50.889 "commit": "c2471e450" 00:05:50.889 } 00:05:50.889 } 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:51.148 18:17:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.148 18:17:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.149 request: 00:05:51.149 { 00:05:51.149 "method": "env_dpdk_get_mem_stats", 00:05:51.149 "req_id": 1 00:05:51.149 } 00:05:51.149 Got JSON-RPC error response 00:05:51.149 response: 00:05:51.149 { 00:05:51.149 "code": -32601, 00:05:51.149 "message": "Method not found" 00:05:51.149 } 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:51.149 18:17:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1909384 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1909384 ']' 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1909384 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.149 18:17:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1909384 00:05:51.409 18:17:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.409 18:17:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.409 18:17:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1909384' 00:05:51.409 killing process with pid 1909384 00:05:51.409 18:17:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 1909384 00:05:51.409 18:17:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 1909384 00:05:51.409 00:05:51.409 real 0m1.669s 00:05:51.409 user 0m1.965s 00:05:51.409 sys 0m0.463s 00:05:51.409 18:17:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.409 18:17:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.409 ************************************ 00:05:51.409 END TEST app_cmdline 00:05:51.409 ************************************ 00:05:51.409 18:17:46 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:51.409 18:17:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.409 18:17:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.409 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.669 ************************************ 00:05:51.669 START TEST version 00:05:51.669 ************************************ 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:51.669 * Looking for test storage... 
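The cmdline test above exercises the RPC allowlist: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and anything else is rejected with -32601. The sequence in shorthand, with paths abbreviated from the traces:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON
    scripts/rpc.py rpc_get_methods           # allowed: lists only the two names
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected: "Method not found" (-32601)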
00:05:51.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.669 18:17:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.669 18:17:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.669 18:17:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.669 18:17:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.669 18:17:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.669 18:17:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.669 18:17:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.669 18:17:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.669 18:17:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.669 18:17:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.669 18:17:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.669 18:17:46 version -- scripts/common.sh@344 -- # case "$op" in 00:05:51.669 18:17:46 version -- scripts/common.sh@345 -- # : 1 00:05:51.669 18:17:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.669 18:17:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.669 18:17:46 version -- scripts/common.sh@365 -- # decimal 1 00:05:51.669 18:17:46 version -- scripts/common.sh@353 -- # local d=1 00:05:51.669 18:17:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.669 18:17:46 version -- scripts/common.sh@355 -- # echo 1 00:05:51.669 18:17:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.669 18:17:46 version -- scripts/common.sh@366 -- # decimal 2 00:05:51.669 18:17:46 version -- scripts/common.sh@353 -- # local d=2 00:05:51.669 18:17:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.669 18:17:46 version -- scripts/common.sh@355 -- # echo 2 00:05:51.669 18:17:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.669 18:17:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.669 18:17:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.669 18:17:46 version -- scripts/common.sh@368 -- # return 0 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.669 --rc genhtml_branch_coverage=1 00:05:51.669 --rc genhtml_function_coverage=1 00:05:51.669 --rc genhtml_legend=1 00:05:51.669 --rc geninfo_all_blocks=1 00:05:51.669 --rc geninfo_unexecuted_blocks=1 00:05:51.669 00:05:51.669 ' 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.669 --rc genhtml_branch_coverage=1 00:05:51.669 --rc genhtml_function_coverage=1 00:05:51.669 --rc genhtml_legend=1 00:05:51.669 --rc geninfo_all_blocks=1 00:05:51.669 --rc geninfo_unexecuted_blocks=1 00:05:51.669 00:05:51.669 ' 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.669 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.669 --rc genhtml_branch_coverage=1 00:05:51.669 --rc genhtml_function_coverage=1 00:05:51.669 --rc genhtml_legend=1 00:05:51.669 --rc geninfo_all_blocks=1 00:05:51.669 --rc geninfo_unexecuted_blocks=1 00:05:51.669 00:05:51.669 ' 00:05:51.669 18:17:46 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.669 --rc genhtml_branch_coverage=1 00:05:51.669 --rc genhtml_function_coverage=1 00:05:51.669 --rc genhtml_legend=1 00:05:51.669 --rc geninfo_all_blocks=1 00:05:51.669 --rc geninfo_unexecuted_blocks=1 00:05:51.669 00:05:51.669 ' 00:05:51.669 18:17:46 version -- app/version.sh@17 -- # get_header_version major 00:05:51.669 18:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.669 18:17:46 version -- app/version.sh@14 -- # cut -f2 00:05:51.669 18:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.669 18:17:46 version -- app/version.sh@17 -- # major=25 00:05:51.669 18:17:46 version -- app/version.sh@18 -- # get_header_version minor 00:05:51.669 18:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.669 18:17:46 version -- app/version.sh@14 -- # cut -f2 00:05:51.669 18:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.669 18:17:46 version -- app/version.sh@18 -- # minor=1 00:05:51.669 18:17:46 version -- app/version.sh@19 -- # get_header_version patch 00:05:51.669 18:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.669 18:17:46 version -- app/version.sh@14 -- # cut -f2 00:05:51.669 18:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.669 18:17:46 version -- app/version.sh@19 -- # patch=0 00:05:51.669 18:17:46 version -- app/version.sh@20 -- # get_header_version suffix 00:05:51.669 18:17:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:51.669 18:17:46 version -- app/version.sh@14 -- # tr -d '"' 00:05:51.669 18:17:46 version -- app/version.sh@14 -- # cut -f2 00:05:51.669 18:17:46 version -- app/version.sh@20 -- # suffix=-pre 00:05:51.669 18:17:46 version -- app/version.sh@22 -- # version=25.1 00:05:51.669 18:17:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:51.669 18:17:46 version -- app/version.sh@28 -- # version=25.1rc0 00:05:51.669 18:17:46 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:51.669 18:17:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:51.930 18:17:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:51.930 18:17:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:51.930 00:05:51.930 real 0m0.273s 00:05:51.930 user 0m0.162s 00:05:51.930 sys 0m0.155s 00:05:51.930 18:17:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.930 
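get_header_version, traced above once per field, is a three-stage pipe over include/spdk/version.h: grep the #define, cut field 2 (tab-delimited by default), strip the quotes. One field spelled out, with a repo-relative path assumed:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
        | cut -f2 | tr -d '"'    # -> 25; MINOR yields 1, PATCH 0, SUFFIX -pre

With patch 0 the string stays 25.1, the -pre suffix maps to rc0, and the resulting 25.1rc0 is compared against python's spdk.__version__.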
18:17:46 version -- common/autotest_common.sh@10 -- # set +x 00:05:51.930 ************************************ 00:05:51.930 END TEST version 00:05:51.930 ************************************ 00:05:51.930 18:17:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:51.930 18:17:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:51.930 18:17:46 -- spdk/autotest.sh@194 -- # uname -s 00:05:51.930 18:17:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:51.930 18:17:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.930 18:17:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:51.930 18:17:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:51.930 18:17:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:51.930 18:17:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:51.930 18:17:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:51.930 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.930 18:17:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:51.930 18:17:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:51.930 18:17:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:51.930 18:17:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:51.930 18:17:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:51.930 18:17:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:51.930 18:17:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:51.930 18:17:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:51.930 18:17:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.930 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.930 ************************************ 00:05:51.930 START TEST nvmf_tcp 00:05:51.930 ************************************ 00:05:51.930 18:17:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:51.930 * Looking for test storage... 
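The autotest.sh stretch above is the suite dispatcher: a chain of flag checks ending with the transport test that routes this job into the TCP path. A rough reconstruction; the guard variable name is an assumption, since the trace only shows the already-expanded values ('[' tcp = rdma ']', '[' tcp = tcp ']'):

    if [ "$SPDK_TEST_NVMF_TRANSPORT" = rdma ]; then
        : # the rdma suite would run here
    elif [ "$SPDK_TEST_NVMF_TRANSPORT" = tcp ]; then
        run_test nvmf_tcp test/nvmf/nvmf.sh --transport=tcp
    fi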
00:05:51.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:52.191 18:17:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:52.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.191 --rc genhtml_branch_coverage=1
00:05:52.191 --rc genhtml_function_coverage=1
00:05:52.191 --rc genhtml_legend=1
00:05:52.191 --rc geninfo_all_blocks=1
00:05:52.191 --rc geninfo_unexecuted_blocks=1
00:05:52.191
00:05:52.191 '
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:52.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.191 --rc genhtml_branch_coverage=1
00:05:52.191 --rc genhtml_function_coverage=1
00:05:52.191 --rc genhtml_legend=1
00:05:52.191 --rc geninfo_all_blocks=1
00:05:52.191 --rc geninfo_unexecuted_blocks=1
00:05:52.191
00:05:52.191 '
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:52.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.191 --rc genhtml_branch_coverage=1
00:05:52.191 --rc genhtml_function_coverage=1
00:05:52.191 --rc genhtml_legend=1
00:05:52.191 --rc geninfo_all_blocks=1
00:05:52.191 --rc geninfo_unexecuted_blocks=1
00:05:52.191
00:05:52.191 '
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:52.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.191 --rc genhtml_branch_coverage=1
00:05:52.191 --rc genhtml_function_coverage=1
00:05:52.191 --rc genhtml_legend=1
00:05:52.191 --rc geninfo_all_blocks=1
00:05:52.191 --rc geninfo_unexecuted_blocks=1
00:05:52.191
00:05:52.191 '
00:05:52.191 18:17:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:05:52.191 18:17:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:05:52.191 18:17:46 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:52.191 18:17:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:52.191 ************************************
00:05:52.191 START TEST nvmf_target_core
00:05:52.191 ************************************
00:05:52.191 18:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:52.191 * Looking for test storage...
00:05:52.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:05:52.191 18:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:52.191 18:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version
00:05:52.192 18:17:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-:
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-:
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<'
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:52.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.454 --rc genhtml_branch_coverage=1
00:05:52.454 --rc genhtml_function_coverage=1
00:05:52.454 --rc genhtml_legend=1
00:05:52.454 --rc geninfo_all_blocks=1
00:05:52.454 --rc geninfo_unexecuted_blocks=1
00:05:52.454
00:05:52.454 '
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:52.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.454 --rc genhtml_branch_coverage=1
00:05:52.454 --rc genhtml_function_coverage=1
00:05:52.454 --rc genhtml_legend=1
00:05:52.454 --rc geninfo_all_blocks=1
00:05:52.454 --rc geninfo_unexecuted_blocks=1
00:05:52.454
00:05:52.454 '
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:52.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.454 --rc genhtml_branch_coverage=1
00:05:52.454 --rc genhtml_function_coverage=1
00:05:52.454 --rc genhtml_legend=1
00:05:52.454 --rc geninfo_all_blocks=1
00:05:52.454 --rc geninfo_unexecuted_blocks=1
00:05:52.454
00:05:52.454 '
00:05:52.454 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:52.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.454 --rc genhtml_branch_coverage=1
00:05:52.454 --rc genhtml_function_coverage=1
00:05:52.454 --rc genhtml_legend=1
00:05:52.455 --rc geninfo_all_blocks=1
00:05:52.455 --rc geninfo_unexecuted_blocks=1
00:05:52.455
00:05:52.455 '
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:52.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
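A note for anyone following the xtrace above: each test script re-sources spdk/scripts/common.sh and decides whether the installed lcov predates version 2 via lt 1.15 2, i.e. cmp_versions 1.15 '<' 2, which is exactly what the scripts/common.sh@333-368 lines trace. A minimal bash sketch of that comparison, reconstructed from the traced statements (the decimal validation helper is elided, so this approximates rather than copies the real scripts/common.sh):

    # Split each version on ".-:" and compare numerically field by field,
    # padding the shorter array with zeros; ties fall through to the next field.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # versions compared equal
    }
    lt 1.15 2 && echo "lcov predates 2: keep the --rc lcov_*_coverage=1 options"

That return status is what selects the LCOV_OPTS/LCOV strings exported in the trace above.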
00:05:52.455 ************************************
00:05:52.455 START TEST nvmf_abort
00:05:52.455 ************************************
00:05:52.455 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:05:52.718 * Looking for test storage...
00:05:52.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:52.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.718 --rc genhtml_branch_coverage=1
00:05:52.718 --rc genhtml_function_coverage=1
00:05:52.718 --rc genhtml_legend=1
00:05:52.718 --rc geninfo_all_blocks=1
00:05:52.718 --rc geninfo_unexecuted_blocks=1
00:05:52.718
00:05:52.718 '
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:52.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.718 --rc genhtml_branch_coverage=1
00:05:52.718 --rc genhtml_function_coverage=1
00:05:52.718 --rc genhtml_legend=1
00:05:52.718 --rc geninfo_all_blocks=1
00:05:52.718 --rc geninfo_unexecuted_blocks=1
00:05:52.718
00:05:52.718 '
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:52.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.718 --rc genhtml_branch_coverage=1
00:05:52.718 --rc genhtml_function_coverage=1
00:05:52.718 --rc genhtml_legend=1
00:05:52.718 --rc geninfo_all_blocks=1
00:05:52.718 --rc geninfo_unexecuted_blocks=1
00:05:52.718
00:05:52.718 '
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:52.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:52.718 --rc genhtml_branch_coverage=1
00:05:52.718 --rc genhtml_function_coverage=1
00:05:52.718 --rc genhtml_legend=1
00:05:52.718 --rc geninfo_all_blocks=1
00:05:52.718 --rc geninfo_unexecuted_blocks=1
00:05:52.718
00:05:52.718 '
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:52.718 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:52.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
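The "[: : integer expression expected" diagnostic that test/nvmf/common.sh line 33 prints every time it is sourced (it has now appeared twice in this excerpt) is a benign script bug rather than a test failure: a flag variable is empty at that point, and the [ builtin cannot evaluate '' -eq 1, so it complains on stderr and returns false, which happens to be the branch the script wants anyway. A small repro with an illustrative variable name (the actual variable at line 33 of common.sh may differ):

    flag=""                                  # stand-in for the flag that is unset here
    [ "$flag" -eq 1 ] && echo enabled        # stderr: [: : integer expression expected
    # Two conventional fixes: default the value, or compare as a string.
    [ "${flag:-0}" -eq 1 ] && echo enabled   # empty falls back to 0, no diagnostic
    [ "$flag" = "1" ] && echo enabled        # string comparison never parses integers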
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:05:52.719 18:17:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:06:00.874 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:06:00.875 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:06:00.875 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:06:00.875 Found net devices under 0000:4b:00.0: cvl_0_0
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:06:00.875 Found net devices under 0000:4b:00.1: cvl_0_1
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:00.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:00.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms
00:06:00.875
00:06:00.875 --- 10.0.0.2 ping statistics ---
00:06:00.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:00.875 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:00.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:00.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms
00:06:00.875
00:06:00.875 --- 10.0.0.1 ping statistics ---
00:06:00.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:00.875 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1913876
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1913876
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1913876 ']'
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:00.875 18:17:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
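Before the target's own output starts below, it is worth condensing what nvmftestinit just traced, because every nvmf test in this run repeats the pattern on this phy host: the ice-driven e810 port cvl_0_0 is moved into a private network namespace for the target, while its peer cvl_0_1 stays in the root namespace as the initiator. The same commands as the trace, stripped of the xtrace prefixes:

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                           # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The comment tag on the iptables rule is what lets the teardown later strip exactly the rules SPDK added (see the iptables-save | grep -v SPDK_NVMF | iptables-restore lines near the end of this test).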
00:06:00.875 [2024-12-06 18:17:54.968055] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:06:00.875 [2024-12-06 18:17:54.968121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:00.875 [2024-12-06 18:17:55.069207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:00.875 [2024-12-06 18:17:55.124235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:00.875 [2024-12-06 18:17:55.124293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:00.875 [2024-12-06 18:17:55.124302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:00.875 [2024-12-06 18:17:55.124309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:00.875 [2024-12-06 18:17:55.124315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:00.875 [2024-12-06 18:17:55.126177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:00.875 [2024-12-06 18:17:55.126339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:00.875 [2024-12-06 18:17:55.126340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:01.137 [2024-12-06 18:17:55.847272] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:01.137 Malloc0
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.137 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:01.137 Delay0
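The rpc_cmd invocations traced here are thin wrappers around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Replayed by hand, the abort test's whole target configuration amounts to the sequence below; this is a sketch assuming the default RPC socket, and the listener calls appear a few lines further down in the log:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256      # flags exactly as traced
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB RAM disk, 4 KiB blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000                      # delay bdev stacked on Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # then the abort example drives queue-depth-128 I/O plus aborts against it:
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

Note that even though nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace, the RPC endpoint is a Unix socket on the shared filesystem, which is why rpc_cmd needs no ip netns exec prefix.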
00:06:01.138 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:01.138 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:06:01.138 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.138 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:01.138 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:01.138 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:06:01.138 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.138 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:01.399 [2024-12-06 18:17:55.939106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:01.399 18:17:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:06:01.399 [2024-12-06 18:17:56.131858] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:06:03.946 Initializing NVMe Controllers
00:06:03.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:06:03.946 controller IO queue size 128 less than required
00:06:03.946 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:06:03.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:06:03.946 Initialization complete. Launching workers.
00:06:03.946 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28647
00:06:03.946 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28708, failed to submit 62
00:06:03.946 success 28651, unsuccessful 57, failed 0
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:03.946 rmmod nvme_tcp
00:06:03.946 rmmod nvme_fabrics
00:06:03.946 rmmod nvme_keyring
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1913876 ']'
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1913876
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1913876 ']'
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1913876
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913876
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1913876'
00:06:03.946 killing process with pid 1913876
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1913876
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1913876
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:03.946 18:17:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:05.905 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:05.905
00:06:05.905 real 0m13.471s
00:06:05.905 user 0m14.283s
00:06:05.905 sys 0m6.636s
00:06:05.905 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:05.905 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:06:05.905 ************************************
00:06:05.905 END TEST nvmf_abort
00:06:05.905 ************************************
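For symmetry, the nvmftestfini teardown traced above reads as the inverse of the setup. Roughly (the namespace removal itself happens inside `_remove_spdk_ns 15> /dev/null`, so its exact command is hidden from the trace; the ip netns delete line below is an assumption, not a traced fact):

    sync
    modprobe -v -r nvme-tcp          # pulls out nvme_tcp, nvme_fabrics, nvme_keyring per the rmmod lines
    kill 1913876                     # killprocess of the nvmf_tgt pid, followed by wait
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's comment-tagged rules
    ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1         # last traced command before END TEST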
00:06:06.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.166 --rc genhtml_branch_coverage=1 00:06:06.166 --rc genhtml_function_coverage=1 00:06:06.166 --rc genhtml_legend=1 00:06:06.166 --rc geninfo_all_blocks=1 00:06:06.166 --rc geninfo_unexecuted_blocks=1 00:06:06.166 00:06:06.166 ' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.166 --rc genhtml_branch_coverage=1 00:06:06.166 --rc genhtml_function_coverage=1 00:06:06.166 --rc genhtml_legend=1 00:06:06.166 --rc geninfo_all_blocks=1 00:06:06.166 --rc geninfo_unexecuted_blocks=1 00:06:06.166 00:06:06.166 ' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.166 --rc genhtml_branch_coverage=1 00:06:06.166 --rc genhtml_function_coverage=1 00:06:06.166 --rc genhtml_legend=1 00:06:06.166 --rc geninfo_all_blocks=1 00:06:06.166 --rc geninfo_unexecuted_blocks=1 00:06:06.166 00:06:06.166 ' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.166 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.166 --rc genhtml_branch_coverage=1 00:06:06.166 --rc genhtml_function_coverage=1 00:06:06.166 --rc genhtml_legend=1 00:06:06.166 --rc geninfo_all_blocks=1 00:06:06.166 --rc geninfo_unexecuted_blocks=1 00:06:06.166 00:06:06.166 ' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same dirs, go first] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same dirs, protoc first] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same PATH as above] 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.166 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.427 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:06.427 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:06.427 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:06.427 18:18:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:14.572 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:14.573 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.573 
18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:14.573 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:14.573 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:14.573 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:14.573 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:14.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.614 ms 00:06:14.574 00:06:14.574 --- 10.0.0.2 ping statistics --- 00:06:14.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.574 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:14.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:06:14.574 00:06:14.574 --- 10.0.0.1 ping statistics --- 00:06:14.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.574 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1919020 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1919020 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1919020 ']' 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.574 18:18:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.574 [2024-12-06 18:18:08.583074] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:06:14.574 [2024-12-06 18:18:08.583138] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.574 [2024-12-06 18:18:08.684418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.574 [2024-12-06 18:18:08.735727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.574 [2024-12-06 18:18:08.735782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.574 [2024-12-06 18:18:08.735792] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.574 [2024-12-06 18:18:08.735799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.574 [2024-12-06 18:18:08.735805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
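Once the target below is configured (tcp transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and the Malloc0/Delay0/NULL1 bdevs) and a 30-second spdk_nvme_perf randread workload is launched against it, the rest of this section's trace is one repeating loop. A condensed sketch of that loop, reconstructed from the commands visible in the trace rather than copied from test/nvmf/target/ns_hotplug_stress.sh (rpc.py stands in for the full scripts/rpc.py path, PERF_PID for the perf pid, and the real script's error handling is omitted):

  # Hotplug-stress pattern as traced: while the perf workload is alive,
  # hot-detach nsid 1, re-attach Delay0, and grow NULL1 one step at a time.
  NQN=nqn.2016-06.io.spdk:cnode1
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do       # keep going while perf runs
      rpc.py nvmf_subsystem_remove_ns "$NQN" 1    # hot-detach namespace 1
      rpc.py nvmf_subsystem_add_ns "$NQN" Delay0  # hot-attach Delay0 again
      null_size=$((null_size + 1))                # 1001, 1002, ... 1037
      rpc.py bdev_null_resize NULL1 "$null_size"  # resize NULL1; prints "true"
  done

Each standalone "true" in the trace below is bdev_null_resize acknowledging a resize; the loop ends once the perf run exits and kill -0 no longer succeeds.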
00:06:14.574 [2024-12-06 18:18:08.737878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.574 [2024-12-06 18:18:08.738040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.574 [2024-12-06 18:18:08.738042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.836 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.836 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:14.836 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:14.836 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.836 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.836 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:14.836 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:14.836 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:15.098 [2024-12-06 18:18:09.627956] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.098 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:15.098 18:18:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.360 [2024-12-06 18:18:10.035159] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.360 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.621 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:15.882 Malloc0 00:06:15.882 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:16.143 Delay0 00:06:16.143 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.143 18:18:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:16.405 NULL1 00:06:16.405 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:16.666 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1919874 00:06:16.667 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:16.667 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:16.667 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.928 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.928 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:16.928 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:17.190 true 00:06:17.190 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:17.190 18:18:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.450 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.450 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:17.450 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:17.710 true 00:06:17.710 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:17.710 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.971 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.971 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:17.971 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:18.231 true 00:06:18.231 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:18.231 18:18:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.492 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.753 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:18.753 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:18.753 true 00:06:18.753 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:18.753 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.013 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.273 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:19.273 18:18:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:19.273 true 00:06:19.273 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:19.273 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.532 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.791 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:19.791 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:19.791 true 00:06:19.791 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:19.791 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.050 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.309 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:20.309 18:18:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:20.309 true 00:06:20.309 18:18:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:20.309 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.569 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.829 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:20.829 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:20.829 true 00:06:21.089 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:21.089 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.089 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.350 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:21.350 18:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:21.609 true 00:06:21.609 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:21.609 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.609 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.868 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:21.868 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:22.127 true 00:06:22.127 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:22.127 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.127 18:18:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.386 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:22.386 18:18:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:22.645 true 00:06:22.645 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:22.645 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.903 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.903 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:22.903 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:23.163 true 00:06:23.163 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:23.163 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.424 18:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.424 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:23.424 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:23.685 true 00:06:23.685 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:23.685 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.946 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.205 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:24.205 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:24.205 true 00:06:24.205 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:24.205 18:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.466 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.729 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:24.729 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:24.729 true 00:06:25.003 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:25.003 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.003 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.264 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:25.264 18:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:25.264 true 00:06:25.525 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:25.525 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.525 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.787 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:25.787 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:26.048 true 00:06:26.048 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:26.048 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.048 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.307 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:26.307 18:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:26.566 true 00:06:26.566 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:26.566 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.826 18:18:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.826 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:26.826 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:27.101 true 00:06:27.101 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:27.101 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.362 18:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.362 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:27.362 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:27.623 true 00:06:27.623 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:27.623 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.884 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.884 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:27.884 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:28.145 true 00:06:28.145 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:28.145 18:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.407 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.407 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:28.407 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:28.668 true 00:06:28.668 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:28.668 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.929 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.189 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:29.189 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:29.189 true 00:06:29.189 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:29.189 18:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.449 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.711 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:29.711 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:29.711 true 00:06:29.711 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:29.711 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.971 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.233 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:30.233 18:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:30.233 true 00:06:30.493 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:30.493 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.493 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.753 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:30.753 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:31.013 true 00:06:31.013 18:18:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:31.013 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.013 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.273 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:31.273 18:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:31.535 true 00:06:31.535 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:31.535 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.535 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.796 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:31.796 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:32.057 true 00:06:32.057 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:32.057 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.057 18:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.318 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:32.318 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:32.578 true 00:06:32.578 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:32.578 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.838 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.838 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:32.838 18:18:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:33.098 true 00:06:33.098 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:33.098 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.359 18:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.359 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:33.359 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:33.619 true 00:06:33.619 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:33.619 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.881 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.881 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:33.881 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:34.141 true 00:06:34.141 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:34.141 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.401 18:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.401 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:34.401 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:34.661 true 00:06:34.661 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:34.661 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.921 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.182 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:35.182 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:35.182 true 00:06:35.182 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:35.182 18:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.443 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.704 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:35.704 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:35.704 true 00:06:35.704 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:35.704 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.964 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.224 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:36.224 18:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:36.224 true 00:06:36.484 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:36.484 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.484 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.746 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:36.746 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:37.006 true 00:06:37.006 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:37.006 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.006 18:18:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.266 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:37.266 18:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:37.526 true 00:06:37.526 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:37.526 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.787 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.787 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:37.787 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:38.046 true 00:06:38.046 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:38.046 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.305 18:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.305 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:38.305 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:38.566 true 00:06:38.566 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:38.566 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.825 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.085 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:39.085 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:39.085 true 00:06:39.085 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:39.085 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.345 18:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.604 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:39.604 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:39.604 true 00:06:39.604 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:39.604 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.864 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.123 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:40.123 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:40.123 true 00:06:40.383 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:40.383 18:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.383 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.642 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:40.642 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:40.903 true 00:06:40.903 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:40.903 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.903 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.164 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:41.164 18:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:41.426 true 00:06:41.426 18:18:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:41.426 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.686 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.686 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:41.686 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:41.946 true 00:06:41.946 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:41.946 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.206 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.206 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:42.206 18:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:42.465 true 00:06:42.465 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:42.466 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.726 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.726 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:42.726 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:42.987 true 00:06:42.987 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:42.987 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.248 18:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.248 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:43.248 18:18:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:43.508 true 00:06:43.508 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:43.508 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.769 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.030 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:44.030 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:44.030 true 00:06:44.030 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:44.030 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.290 18:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.550 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:44.550 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:44.550 true 00:06:44.551 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:44.551 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.811 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.072 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:45.072 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:45.072 true 00:06:45.333 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874 00:06:45.333 18:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.333 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.593 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:06:45.593 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:06:45.854 true
00:06:45.854 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874
00:06:45.854 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:45.854 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.114 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:06:46.115 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:06:46.376 true
00:06:46.376 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874
00:06:46.376 18:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:46.636 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.636 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:06:46.636 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:06:46.897 true
00:06:46.897 Initializing NVMe Controllers
00:06:46.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:46.897 Controller IO queue size 128, less than required.
00:06:46.897 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:46.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:46.897 Initialization complete. Launching workers.
00:06:46.897 ========================================================
00:06:46.897                                                                            Latency(us)
00:06:46.897 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:06:46.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30943.53      15.11    4136.42    1140.48    8343.74
00:06:46.897 ========================================================
00:06:46.897 Total                                                  :   30943.53      15.11    4136.42    1140.48    8343.74
00:06:46.897
00:06:46.897 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1919874
00:06:46.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1919874) - No such process
00:06:46.897 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1919874
00:06:46.897 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.157 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:47.419 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:47.419 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:47.419 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:47.419 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.419 18:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:47.419 null0
00:06:47.419 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:47.419 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.419 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:47.680 null1
00:06:47.680 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:47.680 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:47.680 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:47.941 null2
00:06:47.941 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.941
18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.941 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:48.202 null4 00:06:48.202 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.202 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.202 18:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:48.462 null5 00:06:48.462 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.463 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.463 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:48.463 null6 00:06:48.463 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.463 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.463 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:48.724 null7 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
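The @44-@50 records that fill the trace up to the perf summary above are the first phase of ns_hotplug_stress.sh: a single loop that keeps hot-removing and re-adding namespace 1 while growing the NULL1 bdev, for as long as the background I/O job (PID 1919874 here) stays alive. A minimal sketch of that loop, reconstructed from the xtrace line numbers; rpc_py and PERF_PID are assumed names, and only the RPC calls and their arguments appear verbatim in the log:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1000   # assumed starting value; this stretch of the trace covers 1023..1055
  # @44: loop for as long as the background workload process is still alive
  while kill -0 "$PERF_PID"; do
      # @45/@46: hot-unplug namespace 1, then re-attach the Delay0 bdev as a namespace
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      # @49/@50: bump NULL1 to a new total size in MiB; the "true" records are the RPC reply
      null_size=$((null_size + 1))
      "$rpc_py" bdev_null_resize NULL1 "$null_size"
  done
  # @53: once kill -0 reports "No such process" the loop ends and the workload is reaped
  wait "$PERF_PID"

When the workload exits, the perf summary above is printed, lines @54/@55 strip both namespaces, and the script moves on to the multi-worker phase whose launch is traced here.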
00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
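The @14-@18 records in this stretch come from eight concurrent copies of the add_remove helper, one per namespace, so their loop counters and RPCs interleave freely. Judging by the xtrace (argument handling at @14, the loop at @16, the RPC pair at @17/@18), each worker is roughly the sketch below, with rpc_py as in the earlier sketch:

  # One worker per namespace: attach and detach it ten times, as fast as the RPCs allow.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

Because all eight run in the background, the add and remove records for different NSIDs land in the log in whatever order the RPCs complete; that contention is exactly the hot-plug race this phase is meant to exercise.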
00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
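The launch sequence for those workers is visible in the @58-@66 records: eight null bdevs (100 MiB, 4096-byte block size) are created up front, then one backgrounded add_remove is started per namespace with its PID collected for the final wait (the @66 record a little further below lists all eight PIDs). A sketch under the same assumptions as the previous two:

  nthreads=8
  pids=()
  # @59/@60: create null0..null7, 100 MiB each with a 4096-byte block size
  for ((i = 0; i < nthreads; i++)); do
      "$rpc_py" bdev_null_create "null$i" 100 4096
  done
  # @62-@64: start one worker per namespace and remember its PID
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  # @66: block until every worker has finished its ten add/remove cycles
  wait "${pids[@]}"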
00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.724 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.725 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.725 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1926606 1926608 1926612 1926614 1926617 1926620 1926623 1926625 00:06:48.725 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:48.725 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:48.725 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:48.725 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.725 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.725 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.986 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.248 18:18:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.248 18:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.248 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.248 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.248 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.510 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.771 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.772 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.772 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.772 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.772 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.772 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.032 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.033 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.033 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.033 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.033 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.295 18:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.295 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.295 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.295 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.557 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.818 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.082 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.343 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.344 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.344 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.344 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.344 18:18:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.344 18:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.344 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.604 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.864 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.125 18:18:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.125 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.387 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.387 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.387 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.387 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.387 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.387 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.387 18:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.387 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:52.648 18:18:47 
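The interleaved @16-@18 trace records above correspond to the namespace add/remove loop in target/ns_hotplug_stress.sh. A minimal sketch of what those records appear to execute, assuming a per-namespace worker function and the eight backgrounded invocations inferred from the interleaved output (the wrapper name add_remove and the variable names are illustrative, not the verbatim script):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do                                          # sh@16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" \
                nqn.2016-06.io.spdk:cnode1 "$bdev"                              # sh@17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid" # sh@18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &   # nsids 1-8 backed by bdevs null0-null7
    done
    wait

The out-of-order nsids in the trace (e.g. remove 3, 7, 8 before 1, 2) are consistent with these workers running concurrently rather than with any single sequential loop.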
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:52.648 rmmod nvme_tcp 00:06:52.648 rmmod nvme_fabrics 00:06:52.648 rmmod nvme_keyring 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1919020 ']' 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1919020 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1919020 ']' 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1919020 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.648 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1919020 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1919020' 00:06:52.908 killing process with pid 1919020 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1919020 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1919020 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.908 18:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:55.458 00:06:55.458 real 0m48.936s 00:06:55.458 user 3m19.446s 00:06:55.458 sys 0m17.358s 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.458 ************************************ 00:06:55.458 END TEST nvmf_ns_hotplug_stress 00:06:55.458 ************************************ 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.458 ************************************ 00:06:55.458 START TEST nvmf_delete_subsystem 00:06:55.458 ************************************ 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:55.458 * Looking for test storage... 
00:06:55.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.458 --rc genhtml_branch_coverage=1 00:06:55.458 --rc genhtml_function_coverage=1 00:06:55.458 --rc genhtml_legend=1 00:06:55.458 --rc geninfo_all_blocks=1 00:06:55.458 --rc geninfo_unexecuted_blocks=1 00:06:55.458 00:06:55.458 ' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.458 --rc genhtml_branch_coverage=1 00:06:55.458 --rc genhtml_function_coverage=1 00:06:55.458 --rc genhtml_legend=1 00:06:55.458 --rc geninfo_all_blocks=1 00:06:55.458 --rc geninfo_unexecuted_blocks=1 00:06:55.458 00:06:55.458 ' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.458 --rc genhtml_branch_coverage=1 00:06:55.458 --rc genhtml_function_coverage=1 00:06:55.458 --rc genhtml_legend=1 00:06:55.458 --rc geninfo_all_blocks=1 00:06:55.458 --rc geninfo_unexecuted_blocks=1 00:06:55.458 00:06:55.458 ' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.458 --rc genhtml_branch_coverage=1 00:06:55.458 --rc genhtml_function_coverage=1 00:06:55.458 --rc genhtml_legend=1 00:06:55.458 --rc geninfo_all_blocks=1 00:06:55.458 --rc geninfo_unexecuted_blocks=1 00:06:55.458 00:06:55.458 ' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.458 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:55.459 18:18:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:03.605 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.605 
18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:03.605 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:03.605 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:03.605 Found net devices under 0000:4b:00.1: cvl_0_1 
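A note for readers of the trace: the nvmf_tcp_init entries that follow split the two discovered E810 ports into a point-to-point test topology. The target port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens TCP port 4420 and a ping in each direction verifies the path. A condensed sketch of that plumbing, with every interface name and address taken from the log (the real helper in test/nvmf/common.sh additionally tags the iptables rule with an SPDK_NVMF comment so cleanup can find it later):

# target NIC goes into its own namespace; initiator NIC stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1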
00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.605 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:03.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:07:03.606 00:07:03.606 --- 10.0.0.2 ping statistics --- 00:07:03.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.606 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:03.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:03.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:07:03.606 00:07:03.606 --- 10.0.0.1 ping statistics --- 00:07:03.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.606 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1931907 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1931907 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1931907 ']' 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.606 18:18:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.606 18:18:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.606 [2024-12-06 18:18:57.539386] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:07:03.606 [2024-12-06 18:18:57.539447] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.606 [2024-12-06 18:18:57.639277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.606 [2024-12-06 18:18:57.692060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.606 [2024-12-06 18:18:57.692116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.606 [2024-12-06 18:18:57.692125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.606 [2024-12-06 18:18:57.692132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.606 [2024-12-06 18:18:57.692139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.606 [2024-12-06 18:18:57.693799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.606 [2024-12-06 18:18:57.693924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.606 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.606 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:03.606 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:03.606 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.606 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.866 [2024-12-06 18:18:58.410802] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:03.866 18:18:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.866 [2024-12-06 18:18:58.435110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.866 NULL1 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.866 Delay0 00:07:03.866 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.867 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.867 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.867 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.867 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.867 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1932074 00:07:03.867 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:03.867 18:18:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:03.867 [2024-12-06 18:18:58.562189] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
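By this point the target stack has been assembled entirely over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev (Delay0) whose average and p99 latencies are all set to 1000000 us, so every I/O is held for roughly a second and the subsystem can be deleted while requests are still in flight. The same sequence as a sketch against scripts/rpc.py (rpc_cmd in the trace is autotest's shell wrapper for these calls; all values are taken from the log above):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB backing device, 512-byte blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The burst of 'completed with error (sct=0, sc=8)' lines below is the expected result of the test, not a failure: nvmf_delete_subsystem tears down the queue pairs underneath spdk_nvme_perf, and the queued I/O completes with generic status 0x08 (command aborted due to SQ deletion).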
00:07:05.781 18:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.781 18:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.781 18:19:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 [2024-12-06 18:19:00.607024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15132c0 is same with the state(6) to be set 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 
00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 [2024-12-06 18:19:00.608411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1513680 is same with the state(6) to be set 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error 
(sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 starting I/O failed: -6 00:07:06.043 Write completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 [2024-12-06 18:19:00.613350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7e58000c40 is same with the state(6) to be set 00:07:06.043 starting I/O failed: -6 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.043 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 
00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Write completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.044 Read completed with error (sct=0, sc=8) 00:07:06.986 [2024-12-06 18:19:01.578489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15149b0 is same with the state(6) to be set 00:07:06.986 Read completed with error (sct=0, sc=8) 00:07:06.986 Read completed with error (sct=0, sc=8) 00:07:06.986 Write completed with error (sct=0, sc=8) 00:07:06.986 Read completed with error (sct=0, sc=8) 00:07:06.986 Read completed with error (sct=0, sc=8) 00:07:06.986 Read completed with error (sct=0, sc=8) 00:07:06.986 Read completed with error (sct=0, sc=8) 00:07:06.986 Read completed with error (sct=0, sc=8) 00:07:06.986 Write completed with error (sct=0, sc=8) 00:07:06.986 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 [2024-12-06 18:19:01.610795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15134a0 is same with the state(6) to be set 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 [2024-12-06 18:19:01.610951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1513860 is same with the state(6) to be set 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 
Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 [2024-12-06 18:19:01.614906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7e5800d020 is same with the state(6) to be set 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 Write completed with error (sct=0, sc=8) 00:07:06.987 Read completed with error (sct=0, sc=8) 00:07:06.987 [2024-12-06 18:19:01.615018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7e5800d7c0 is same with the state(6) to be set 00:07:06.987 Initializing NVMe Controllers 00:07:06.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.987 Controller IO queue size 128, less than required. 00:07:06.987 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:06.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:06.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:06.987 Initialization complete. Launching workers. 
00:07:06.987 ======================================================== 00:07:06.987 Latency(us) 00:07:06.987 Device Information : IOPS MiB/s Average min max 00:07:06.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.78 0.08 911447.12 728.81 1006986.56 00:07:06.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.80 0.08 926859.78 331.44 1012360.93 00:07:06.987 ======================================================== 00:07:06.987 Total : 319.59 0.16 919057.42 331.44 1012360.93 00:07:06.987 00:07:06.987 [2024-12-06 18:19:01.615729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15149b0 (9): Bad file descriptor 00:07:06.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:06.987 18:19:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.987 18:19:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:06.987 18:19:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1932074 00:07:06.987 18:19:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1932074 00:07:07.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1932074) - No such process 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1932074 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1932074 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1932074 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.559 18:19:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.559 [2024-12-06 18:19:02.144849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1932849 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1932849 00:07:07.559 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.559 [2024-12-06 18:19:02.244332] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
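The repeated '(( delay++ > 20 ))' / 'kill -0 1932849' / 'sleep 0.5' entries below are a single polling loop unrolled by xtrace: the script checks twice per second whether the second perf run (capped at -t 3 seconds this time) is still alive, and gives up if it outlives its budget. Reconstructed as a sketch from the delete_subsystem.sh line numbers echoed in the trace (the failure branch is an assumption; only the condition and cadence are visible in the log):

delay=0
while kill -0 "$perf_pid" 2> /dev/null; do   # line 57: perf still running?
    sleep 0.5                                # line 58
    (( delay++ > 20 )) && exit 1             # line 60: past ~10 s, give up
done
# the 'kill: (1932849) - No such process' entry marks the loop's normal exit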
00:07:08.130 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.130 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1932849 00:07:08.130 18:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.391 18:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.391 18:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1932849 00:07:08.391 18:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.985 18:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.985 18:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1932849 00:07:08.985 18:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.556 18:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.556 18:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1932849 00:07:09.556 18:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.202 18:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.202 18:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1932849 00:07:10.202 18:19:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.491 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.491 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1932849 00:07:10.491 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.786 Initializing NVMe Controllers 00:07:10.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:10.786 Controller IO queue size 128, less than required. 00:07:10.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:10.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:10.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:10.786 Initialization complete. Launching workers. 
00:07:10.786 ======================================================== 00:07:10.786 Latency(us) 00:07:10.786 Device Information : IOPS MiB/s Average min max 00:07:10.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001762.26 1000143.63 1005155.17 00:07:10.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003441.67 1000280.30 1008996.51 00:07:10.786 ======================================================== 00:07:10.786 Total : 256.00 0.12 1002601.97 1000143.63 1008996.51 00:07:10.786 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1932849 00:07:11.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1932849) - No such process 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1932849 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:11.105 rmmod nvme_tcp 00:07:11.105 rmmod nvme_fabrics 00:07:11.105 rmmod nvme_keyring 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1931907 ']' 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1931907 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1931907 ']' 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1931907 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1931907 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1931907' 00:07:11.105 killing process with pid 1931907 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1931907 00:07:11.105 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1931907 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.378 18:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.289 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.289 00:07:13.289 real 0m18.304s 00:07:13.289 user 0m30.531s 00:07:13.289 sys 0m6.755s 00:07:13.289 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.289 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.289 ************************************ 00:07:13.289 END TEST nvmf_delete_subsystem 00:07:13.289 ************************************ 00:07:13.289 18:19:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:13.289 18:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:13.289 18:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.289 18:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.551 ************************************ 00:07:13.551 START TEST nvmf_host_management 00:07:13.551 ************************************ 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:13.551 * Looking for test storage... 
00:07:13.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.551 --rc genhtml_branch_coverage=1 00:07:13.551 --rc genhtml_function_coverage=1 00:07:13.551 --rc genhtml_legend=1 00:07:13.551 --rc geninfo_all_blocks=1 00:07:13.551 --rc geninfo_unexecuted_blocks=1 00:07:13.551 00:07:13.551 ' 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.551 --rc genhtml_branch_coverage=1 00:07:13.551 --rc genhtml_function_coverage=1 00:07:13.551 --rc genhtml_legend=1 00:07:13.551 --rc geninfo_all_blocks=1 00:07:13.551 --rc geninfo_unexecuted_blocks=1 00:07:13.551 00:07:13.551 ' 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.551 --rc genhtml_branch_coverage=1 00:07:13.551 --rc genhtml_function_coverage=1 00:07:13.551 --rc genhtml_legend=1 00:07:13.551 --rc geninfo_all_blocks=1 00:07:13.551 --rc geninfo_unexecuted_blocks=1 00:07:13.551 00:07:13.551 ' 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:13.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.551 --rc genhtml_branch_coverage=1 00:07:13.551 --rc genhtml_function_coverage=1 00:07:13.551 --rc genhtml_legend=1 00:07:13.551 --rc geninfo_all_blocks=1 00:07:13.551 --rc geninfo_unexecuted_blocks=1 00:07:13.551 00:07:13.551 ' 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.551 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
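
[annotation] The ballooning PATH above is /etc/opt/spdk-pkgdep/paths/export.sh prepending the same Go/protoc/golangci directories on every sourcing, so each nested source adds another copy. A hypothetical idempotent prepend that would keep PATH stable (an assumption, not the shipped export.sh):

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present; leave PATH unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH
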
-- # '[' '' -eq 1 ']' 00:07:13.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.552 18:19:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:21.693 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
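
[annotation] The "[: : integer expression expected" complaint above comes from an empty string reaching a numeric test at nvmf/common.sh line 33 ('[' '' -eq 1 ']'). A defensive form that tolerates an unset or empty value, with a hypothetical flag name:

    some_flag=''                           # hypothetical: arrives empty, as in the trace
    if [ "${some_flag:-0}" -eq 1 ]; then   # :-0 guarantees a numeric operand
        echo 'feature enabled'
    fi
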
-ga e810 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:21.694 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:21.694 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:21.694 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.694 18:19:15 
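
[annotation] The pci_net_devs glob traced above resolves each PCI function to its kernel netdev through sysfs; a hypothetical standalone equivalent for the two E810 ports found here:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0, cvl_0_1
    done
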
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:21.694 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
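
[annotation] Condensed replay of the namespace plumbing traced above: the target-side port cvl_0_0 is moved into its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) exchange packets over a real link on a single host. Interface names and addresses are taken from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
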
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:21.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:07:21.694 00:07:21.694 --- 10.0.0.2 ping statistics --- 00:07:21.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.694 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:07:21.694 00:07:21.694 --- 10.0.0.1 ping statistics --- 00:07:21.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.694 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1937753 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1937753 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:21.694 18:19:15 
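
[annotation] The ipts wrapper above tags its ACCEPT rule with an 'SPDK_NVMF:' comment that echoes the original arguments, so teardown can later find and delete exactly the rules the test inserted. A sketch of that cleanup (the delete loop is an assumption; this log only shows the insertion):

    iptables -S INPUT | grep -F 'SPDK_NVMF:' | sed 's/^-A/-D/' |
        while read -r rule; do
            eval "iptables $rule"          # -S output is already shell-quoted
        done
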
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1937753 ']' 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.694 18:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.694 [2024-12-06 18:19:15.838367] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:07:21.694 [2024-12-06 18:19:15.838434] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.694 [2024-12-06 18:19:15.937616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.694 [2024-12-06 18:19:15.991104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.694 [2024-12-06 18:19:15.991167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.694 [2024-12-06 18:19:15.991177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.694 [2024-12-06 18:19:15.991185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.694 [2024-12-06 18:19:15.991191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:21.694 [2024-12-06 18:19:15.993593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.694 [2024-12-06 18:19:15.993757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.694 [2024-12-06 18:19:15.993926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.694 [2024-12-06 18:19:15.993927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.954 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.955 [2024-12-06 18:19:16.713429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:21.955 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.216 Malloc0 00:07:22.216 [2024-12-06 18:19:16.791032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
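
[annotation] host_management.sh@22-30 above batches target setup into rpcs.txt and replays it through rpc_cmd; the file's contents are not echoed in this log, so the following is a plausible reconstruction from the artifacts that do appear (Malloc0, the 64/512 malloc constants, cnode0, host0, serial SPDKISFASTANDAWESOME, the 10.0.0.2:4420 listener), not a verbatim copy:

    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
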
target/host_management.sh@73 -- # perfpid=1938018 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1938018 /var/tmp/bdevperf.sock 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1938018 ']' 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:22.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:22.216 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:22.216 { 00:07:22.216 "params": { 00:07:22.216 "name": "Nvme$subsystem", 00:07:22.216 "trtype": "$TEST_TRANSPORT", 00:07:22.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.216 "adrfam": "ipv4", 00:07:22.216 "trsvcid": "$NVMF_PORT", 00:07:22.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.216 "hdgst": ${hdgst:-false}, 00:07:22.217 "ddgst": ${ddgst:-false} 00:07:22.217 }, 00:07:22.217 "method": "bdev_nvme_attach_controller" 00:07:22.217 } 00:07:22.217 EOF 00:07:22.217 )") 00:07:22.217 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:22.217 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:22.217 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:22.217 18:19:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:22.217 "params": { 00:07:22.217 "name": "Nvme0", 00:07:22.217 "trtype": "tcp", 00:07:22.217 "traddr": "10.0.0.2", 00:07:22.217 "adrfam": "ipv4", 00:07:22.217 "trsvcid": "4420", 00:07:22.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:22.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:22.217 "hdgst": false, 00:07:22.217 "ddgst": false 00:07:22.217 }, 00:07:22.217 "method": "bdev_nvme_attach_controller" 00:07:22.217 }' 00:07:22.217 [2024-12-06 18:19:16.900089] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
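
[annotation] The --json /dev/fd/63 in the bdevperf command line above is bash process substitution: the heredoc-assembled bdev_nvme_attach_controller stanza is handed to bdevperf without ever touching disk. Spelled out (binary path shortened):

    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
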
00:07:22.217 [2024-12-06 18:19:16.900156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938018 ] 00:07:22.217 [2024-12-06 18:19:16.993989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.478 [2024-12-06 18:19:17.047904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.739 Running I/O for 10 seconds... 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:23.000 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.001 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=618 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 618 -ge 100 ']' 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:23.263 18:19:17 
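
[annotation] waitforio above polls bdevperf's private RPC socket until Nvme0n1 reports at least 100 completed reads, proving I/O is actually flowing before the host is yanked; the first probe already sees 618. Condensed form (the sleep interval is an assumption):

    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                        jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 0.25
    done
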
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.263 [2024-12-06 18:19:17.806971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c940 is same with the state(6) to be set 00:07:23.263 [2024-12-06 18:19:17.807085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c940 is same with the state(6) to be set 00:07:23.263 [2024-12-06 18:19:17.807095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c940 is same with the state(6) to be set 00:07:23.263 [2024-12-06 18:19:17.807103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c940 is same with the state(6) to be set 00:07:23.263 [2024-12-06 18:19:17.807111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c940 is same with the state(6) to be set 00:07:23.263 [2024-12-06 18:19:17.807118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c940 is same with the state(6) to be set 00:07:23.263 [2024-12-06 18:19:17.807125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203c940 is same with the state(6) to be set 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.263 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.263 [2024-12-06 18:19:17.816886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.263 [2024-12-06 18:19:17.816945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.816957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.263 [2024-12-06 18:19:17.816965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.816975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.263 [2024-12-06 18:19:17.816984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.816994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:23.263 [2024-12-06 18:19:17.817013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x908c20 is same with the state(6) to be set 00:07:23.263 [2024-12-06 18:19:17.817124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.263 [2024-12-06 18:19:17.817483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.263 [2024-12-06 18:19:17.817495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 [2024-12-06 18:19:17.817685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:23.264 [2024-12-06 18:19:17.817696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:23.264 
[2024-12-06 18:19:17.817704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:23.264 [2024-12-06 18:19:17.817713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:23.264 [2024-12-06 18:19:17.817722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:23.264 [... repeated command/completion pairs elided: every remaining queued I/O on qid 1 (WRITE cid:33-63, lba 94336-98176; READ cid:12-13, lba 91648-91776; all len:128) was printed with the same ABORTED - SQ DELETION (00/08) status ...]
00:07:23.265 [2024-12-06 18:19:17.819632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:23.265 task offset: 91904 on job bdev=Nvme0n1 fails
00:07:23.265
00:07:23.265 Latency(us)
00:07:23.265 [2024-12-06T17:19:18.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:23.265 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:23.265 Job: Nvme0n1 ended in about 0.48 seconds with error
00:07:23.265 Verification LBA range: start 0x0 length 0x400
00:07:23.265 Nvme0n1 : 0.48 1463.48 91.47 133.04 0.00 38971.05 1966.08 36700.16
00:07:23.265 [2024-12-06T17:19:18.049Z] ===================================================================================================================
00:07:23.265 [2024-12-06T17:19:18.049Z] Total : 1463.48 91.47 133.04 0.00 38971.05 1966.08 36700.16
00:07:23.265
00:07:23.265 [2024-12-06 18:19:17.821855] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:23.265 [2024-12-06 18:19:17.821894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x908c20 (9): Bad file descriptor
00:07:23.265 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.265 18:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:23.265 [2024-12-06 18:19:17.915858] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:07:24.205 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1938018 00:07:24.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1938018) - No such process 00:07:24.205 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:24.205 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:24.205 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:24.205 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:24.205 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:24.205 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:24.206 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:24.206 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:24.206 { 00:07:24.206 "params": { 00:07:24.206 "name": "Nvme$subsystem", 00:07:24.206 "trtype": "$TEST_TRANSPORT", 00:07:24.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:24.206 "adrfam": "ipv4", 00:07:24.206 "trsvcid": "$NVMF_PORT", 00:07:24.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:24.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:24.206 "hdgst": ${hdgst:-false}, 00:07:24.206 "ddgst": ${ddgst:-false} 00:07:24.206 }, 00:07:24.206 "method": "bdev_nvme_attach_controller" 00:07:24.206 } 00:07:24.206 EOF 00:07:24.206 )") 00:07:24.206 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:24.206 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:24.206 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:24.206 18:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:24.206 "params": { 00:07:24.206 "name": "Nvme0", 00:07:24.206 "trtype": "tcp", 00:07:24.206 "traddr": "10.0.0.2", 00:07:24.206 "adrfam": "ipv4", 00:07:24.206 "trsvcid": "4420", 00:07:24.206 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.206 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:24.206 "hdgst": false, 00:07:24.206 "ddgst": false 00:07:24.206 }, 00:07:24.206 "method": "bdev_nvme_attach_controller" 00:07:24.206 }' 00:07:24.206 [2024-12-06 18:19:18.885058] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:07:24.206 [2024-12-06 18:19:18.885111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938373 ] 00:07:24.206 [2024-12-06 18:19:18.972650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.467 [2024-12-06 18:19:19.008190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.467 Running I/O for 1 seconds... 
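The trace above shows how the harness builds the bdevperf configuration: gen_nvmf_target_json expands one heredoc stanza per subsystem argument, and bdevperf consumes the result through process substitution, which is why the log shows --json /dev/fd/62. A minimal bash reconstruction from the traced commands follows; the NVMF_* and TEST_TRANSPORT variables are assumed to come from the test environment, and the exact join/pretty-print plumbing is approximated from the xtrace output, not copied from the real common.sh.

# Sketch of gen_nvmf_target_json, reconstructed from the xtrace above.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller stanza per subsystem index; hdgst/ddgst
        # default to false when unset, matching the JSON printed above.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Comma-join the stanzas and pretty-print; with the single subsystem used
    # here (gen_nvmf_target_json 0) the output is the one object shown above.
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}
# Usage as in this run: bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1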
00:07:25.668 1583.00 IOPS, 98.94 MiB/s
00:07:25.668 Latency(us)
00:07:25.668 [2024-12-06T17:19:20.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:25.668 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:25.668 Verification LBA range: start 0x0 length 0x400
00:07:25.668 Nvme0n1 : 1.04 1605.55 100.35 0.00 0.00 39178.26 6471.68 32331.09
00:07:25.668 [2024-12-06T17:19:20.452Z] ===================================================================================================================
00:07:25.668 [2024-12-06T17:19:20.452Z] Total : 1605.55 100.35 0.00 0.00 39178.26 6471.68 32331.09
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:25.668 rmmod nvme_tcp
00:07:25.668 rmmod nvme_fabrics
00:07:25.668 rmmod nvme_keyring
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1937753 ']'
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1937753
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1937753 ']'
00:07:25.668 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1937753
00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1937753
00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:07:25.928 18:19:20
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1937753' 00:07:25.928 killing process with pid 1937753 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1937753 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1937753 00:07:25.928 [2024-12-06 18:19:20.548716] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.928 18:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:28.471 00:07:28.471 real 0m14.557s 00:07:28.471 user 0m22.957s 00:07:28.471 sys 0m6.826s 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:28.471 ************************************ 00:07:28.471 END TEST nvmf_host_management 00:07:28.471 ************************************ 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.471 ************************************ 00:07:28.471 START TEST nvmf_lvol 00:07:28.471 ************************************ 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:28.471 * Looking for test storage... 00:07:28.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:28.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.471 --rc genhtml_branch_coverage=1 00:07:28.471 --rc genhtml_function_coverage=1 00:07:28.471 --rc genhtml_legend=1 00:07:28.471 --rc geninfo_all_blocks=1 00:07:28.471 --rc geninfo_unexecuted_blocks=1 00:07:28.471 00:07:28.471 ' 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:28.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.471 --rc genhtml_branch_coverage=1 00:07:28.471 --rc genhtml_function_coverage=1 00:07:28.471 --rc genhtml_legend=1 00:07:28.471 --rc geninfo_all_blocks=1 00:07:28.471 --rc geninfo_unexecuted_blocks=1 00:07:28.471 00:07:28.471 ' 00:07:28.471 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:28.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.471 --rc genhtml_branch_coverage=1 00:07:28.471 --rc genhtml_function_coverage=1 00:07:28.472 --rc genhtml_legend=1 00:07:28.472 --rc geninfo_all_blocks=1 00:07:28.472 --rc geninfo_unexecuted_blocks=1 00:07:28.472 00:07:28.472 ' 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:28.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.472 --rc genhtml_branch_coverage=1 00:07:28.472 --rc genhtml_function_coverage=1 00:07:28.472 --rc genhtml_legend=1 00:07:28.472 --rc geninfo_all_blocks=1 00:07:28.472 --rc geninfo_unexecuted_blocks=1 00:07:28.472 00:07:28.472 ' 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
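The lt/cmp_versions helpers traced above (used to decide which lcov options apply) compare dotted version strings field by field. A sketch reconstructed from the traced steps follows; the non-numeric fallback in decimal and the '>' branch are assumptions, since the log only exercises numeric fields and the '<' operator.

# Reconstruction (sketch) of the version helpers from scripts/common.sh, per the xtrace above.
decimal() {
    local d=$1
    # Numeric fields pass through; anything else compares as 0 (assumption).
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        if (( ver1[v] > ver2[v] )); then
            [[ $op == '>' ]] && return 0 || return 1
        elif (( ver1[v] < ver2[v] )); then
            [[ $op == '<' ]] && return 0 || return 1
        fi
    done
    return 1   # all fields equal: strict comparison is false
}

lt() { cmp_versions "$1" '<' "$2"; }
# lt 1.15 2 && echo "lcov is older than 2"   # matches the traced call, which returned 0 (true)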
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain directories repeated six more times, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[toolchain directories repeated as above, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[toolchain directories repeated as above, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[toolchain directories repeated as above, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:28.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- #
LVOL_BDEV_INIT_SIZE=20 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:28.472 18:19:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.616 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:36.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:36.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:36.617 18:19:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:36.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:36.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:36.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:07:36.617 00:07:36.617 --- 10.0.0.2 ping statistics --- 00:07:36.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.617 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:07:36.617 00:07:36.617 --- 10.0.0.1 ping statistics --- 00:07:36.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.617 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1943067 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1943067 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1943067 ']' 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.617 18:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:36.617 [2024-12-06 18:19:30.492434] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
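The interface setup traced above is what lets the target listen on 10.0.0.2 while the initiator reaches it from 10.0.0.1: one port of the E810 pair is moved into a network namespace and the firewall is opened for the NVMe/TCP port, with a comment tag so teardown can strip exactly these rules. The following is a condensed sketch of the commands from the trace; the interface names cvl_0_0/cvl_0_1 are the ice ports found earlier and the namespace name follows the log.

# Condensed from the nvmf_tcp_init trace: the target side runs inside a netns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in; the SPDK_NVMF comment tag is what later lets
# cleanup run: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator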
00:07:36.617 [2024-12-06 18:19:30.492496] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.617 [2024-12-06 18:19:30.591181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.617 [2024-12-06 18:19:30.643875] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.617 [2024-12-06 18:19:30.643927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.617 [2024-12-06 18:19:30.643935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.617 [2024-12-06 18:19:30.643943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.617 [2024-12-06 18:19:30.643949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.617 [2024-12-06 18:19:30.645990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.617 [2024-12-06 18:19:30.646151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.617 [2024-12-06 18:19:30.646152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.617 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.617 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:36.617 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.617 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.617 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:36.617 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.617 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:36.878 [2024-12-06 18:19:31.527427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.878 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:37.139 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:37.139 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:37.399 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:37.399 18:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:37.660 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:37.660 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4132a688-563f-469a-bef5-0c87a657b92f 00:07:37.660 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4132a688-563f-469a-bef5-0c87a657b92f lvol 20 00:07:37.921 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e2dab39e-466e-4776-a40e-1c6ccb879576 00:07:37.921 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:38.182 18:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e2dab39e-466e-4776-a40e-1c6ccb879576 00:07:38.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.443 [2024-12-06 18:19:33.190624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.443 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.704 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1943672 00:07:38.704 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:38.704 18:19:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:39.645 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e2dab39e-466e-4776-a40e-1c6ccb879576 MY_SNAPSHOT 00:07:39.906 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6228ebfd-51c3-4b42-8270-1ed5cf581a27 00:07:39.906 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e2dab39e-466e-4776-a40e-1c6ccb879576 30 00:07:40.167 18:19:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6228ebfd-51c3-4b42-8270-1ed5cf581a27 MY_CLONE 00:07:40.429 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=298a2fe7-7282-46ad-b565-04153bcbe197 00:07:40.429 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 298a2fe7-7282-46ad-b565-04153bcbe197 00:07:40.689 18:19:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1943672 00:07:50.681 Initializing NVMe Controllers 00:07:50.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:50.681 Controller IO queue size 128, less than required. 00:07:50.681 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
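Stripped of xtrace noise, the lvol test above drives the following RPC sequence. This is a condensed sketch, not the script itself: $rpc abbreviates the full workspace path to rpc.py used in the log, and capturing UUIDs into shell variables stands in for the values (4132a688-..., e2dab39e-..., 6228ebfd-..., 298a2fe7-...) printed by each call.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                      # Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                      # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvstore on the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB lvol (LVOL_BDEV_INIT_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT) # snapshot taken while perf I/O runs
$rpc bdev_lvol_resize "$lvol" 30                    # grow 20 -> 30 MiB (LVOL_BDEV_FINAL_SIZE)
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                     # decouple the clone from its snapshot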
00:07:50.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:50.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:50.681 Initialization complete. Launching workers.
00:07:50.681 ========================================================
00:07:50.681 Latency(us)
00:07:50.681 Device Information : IOPS MiB/s Average min max
00:07:50.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16024.10 62.59 7990.56 1562.85 56877.99
00:07:50.681 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16933.30 66.15 7561.90 1160.47 50217.47
00:07:50.681 ========================================================
00:07:50.681 Total : 32957.40 128.74 7770.32 1160.47 56877.99
00:07:50.681
00:07:50.681 18:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e2dab39e-466e-4776-a40e-1c6ccb879576
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4132a688-563f-469a-bef5-0c87a657b92f
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:50.681 rmmod nvme_tcp
00:07:50.681 rmmod nvme_fabrics
00:07:50.681 rmmod nvme_keyring
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1943067 ']'
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1943067
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1943067 ']'
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1943067
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1943067
00:07:50.681 18:19:44
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1943067' 00:07:50.681 killing process with pid 1943067 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1943067 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1943067 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.681 18:19:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.064 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.064 00:07:52.065 real 0m24.053s 00:07:52.065 user 1m5.299s 00:07:52.065 sys 0m8.689s 00:07:52.065 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.065 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.065 ************************************ 00:07:52.065 END TEST nvmf_lvol 00:07:52.065 ************************************ 00:07:52.065 18:19:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:52.065 18:19:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.065 18:19:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.065 18:19:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.327 ************************************ 00:07:52.327 START TEST nvmf_lvs_grow 00:07:52.327 ************************************ 00:07:52.327 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:52.327 * Looking for test storage... 
00:07:52.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.327 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:52.327 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:52.327 18:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:52.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.327 --rc genhtml_branch_coverage=1 00:07:52.327 --rc genhtml_function_coverage=1 00:07:52.327 --rc genhtml_legend=1 00:07:52.327 --rc geninfo_all_blocks=1 00:07:52.327 --rc geninfo_unexecuted_blocks=1 00:07:52.327 00:07:52.327 ' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:52.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.327 --rc genhtml_branch_coverage=1 00:07:52.327 --rc genhtml_function_coverage=1 00:07:52.327 --rc genhtml_legend=1 00:07:52.327 --rc geninfo_all_blocks=1 00:07:52.327 --rc geninfo_unexecuted_blocks=1 00:07:52.327 00:07:52.327 ' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:52.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.327 --rc genhtml_branch_coverage=1 00:07:52.327 --rc genhtml_function_coverage=1 00:07:52.327 --rc genhtml_legend=1 00:07:52.327 --rc geninfo_all_blocks=1 00:07:52.327 --rc geninfo_unexecuted_blocks=1 00:07:52.327 00:07:52.327 ' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:52.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.327 --rc genhtml_branch_coverage=1 00:07:52.327 --rc genhtml_function_coverage=1 00:07:52.327 --rc genhtml_legend=1 00:07:52.327 --rc geninfo_all_blocks=1 00:07:52.327 --rc geninfo_unexecuted_blocks=1 00:07:52.327 00:07:52.327 ' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:52.327 18:19:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.327 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.588 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.589 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.589 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:52.589 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:52.589 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:52.589 18:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:00.731 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:00.731 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.731 18:19:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:00.731 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:00.731 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.731 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:08:00.732 00:08:00.732 --- 10.0.0.2 ping statistics --- 00:08:00.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.732 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:08:00.732 00:08:00.732 --- 10.0.0.1 ping statistics --- 00:08:00.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.732 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1950138 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1950138 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1950138 ']' 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.732 18:19:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.732 [2024-12-06 18:19:54.677694] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:08:00.732 [2024-12-06 18:19:54.677759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.732 [2024-12-06 18:19:54.773936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.732 [2024-12-06 18:19:54.824680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.732 [2024-12-06 18:19:54.824731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.732 [2024-12-06 18:19:54.824741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.732 [2024-12-06 18:19:54.824748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.732 [2024-12-06 18:19:54.824755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.732 [2024-12-06 18:19:54.825493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.732 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.732 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:00.732 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.732 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.732 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:00.992 [2024-12-06 18:19:55.709122] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:00.992 ************************************ 00:08:00.992 START TEST lvs_grow_clean 00:08:00.992 ************************************ 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:00.992 18:19:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:00.992 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.253 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.253 18:19:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.253 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:01.253 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:01.514 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:01.514 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:01.514 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:01.801 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:01.801 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:01.801 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 494838c9-817b-4dea-b5a9-cc65352b0f43 lvol 150 00:08:02.158 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b5b00275-8fa8-4e48-a37b-ab968a7a6a82 00:08:02.158 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:02.158 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:02.158 [2024-12-06 18:19:56.740279] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:02.158 [2024-12-06 18:19:56.740352] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:02.158 true 00:08:02.158 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:02.158 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:02.449 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:02.449 18:19:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:02.449 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b5b00275-8fa8-4e48-a37b-ab968a7a6a82 00:08:02.708 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:02.968 [2024-12-06 18:19:57.502685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1950835 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1950835 /var/tmp/bdevperf.sock 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1950835 ']' 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.968 18:19:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:02.968 [2024-12-06 18:19:57.742190] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:08:02.968 [2024-12-06 18:19:57.742258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950835 ] 00:08:03.227 [2024-12-06 18:19:57.831802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.227 [2024-12-06 18:19:57.883626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.167 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.167 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:04.167 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:04.167 Nvme0n1 00:08:04.167 18:19:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:04.429 [ 00:08:04.429 { 00:08:04.429 "name": "Nvme0n1", 00:08:04.429 "aliases": [ 00:08:04.429 "b5b00275-8fa8-4e48-a37b-ab968a7a6a82" 00:08:04.429 ], 00:08:04.430 "product_name": "NVMe disk", 00:08:04.430 "block_size": 4096, 00:08:04.430 "num_blocks": 38912, 00:08:04.430 "uuid": "b5b00275-8fa8-4e48-a37b-ab968a7a6a82", 00:08:04.430 "numa_id": 0, 00:08:04.430 "assigned_rate_limits": { 00:08:04.430 "rw_ios_per_sec": 0, 00:08:04.430 "rw_mbytes_per_sec": 0, 00:08:04.430 "r_mbytes_per_sec": 0, 00:08:04.430 "w_mbytes_per_sec": 0 00:08:04.430 }, 00:08:04.430 "claimed": false, 00:08:04.430 "zoned": false, 00:08:04.430 "supported_io_types": { 00:08:04.430 "read": true, 00:08:04.430 "write": true, 00:08:04.430 "unmap": true, 00:08:04.430 "flush": true, 00:08:04.430 "reset": true, 00:08:04.430 "nvme_admin": true, 00:08:04.430 "nvme_io": true, 00:08:04.430 "nvme_io_md": false, 00:08:04.430 "write_zeroes": true, 00:08:04.430 "zcopy": false, 00:08:04.430 "get_zone_info": false, 00:08:04.430 "zone_management": false, 00:08:04.430 "zone_append": false, 00:08:04.430 "compare": true, 00:08:04.430 "compare_and_write": true, 00:08:04.430 "abort": true, 00:08:04.430 "seek_hole": false, 00:08:04.430 "seek_data": false, 00:08:04.430 "copy": true, 00:08:04.430 "nvme_iov_md": false 00:08:04.430 }, 00:08:04.430 "memory_domains": [ 00:08:04.430 { 00:08:04.430 "dma_device_id": "system", 00:08:04.430 "dma_device_type": 1 00:08:04.430 } 00:08:04.430 ], 00:08:04.430 "driver_specific": { 00:08:04.430 "nvme": [ 00:08:04.430 { 00:08:04.430 "trid": { 00:08:04.430 "trtype": "TCP", 00:08:04.430 "adrfam": "IPv4", 00:08:04.430 "traddr": "10.0.0.2", 00:08:04.430 "trsvcid": "4420", 00:08:04.430 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:04.430 }, 00:08:04.430 "ctrlr_data": { 00:08:04.430 "cntlid": 1, 00:08:04.430 "vendor_id": "0x8086", 00:08:04.430 "model_number": "SPDK bdev Controller", 00:08:04.430 "serial_number": "SPDK0", 00:08:04.430 "firmware_revision": "25.01", 00:08:04.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:04.430 "oacs": { 00:08:04.430 "security": 0, 00:08:04.430 "format": 0, 00:08:04.430 "firmware": 0, 00:08:04.430 "ns_manage": 0 00:08:04.430 }, 00:08:04.430 "multi_ctrlr": true, 00:08:04.430 
"ana_reporting": false 00:08:04.430 }, 00:08:04.430 "vs": { 00:08:04.430 "nvme_version": "1.3" 00:08:04.430 }, 00:08:04.430 "ns_data": { 00:08:04.430 "id": 1, 00:08:04.430 "can_share": true 00:08:04.430 } 00:08:04.430 } 00:08:04.430 ], 00:08:04.430 "mp_policy": "active_passive" 00:08:04.430 } 00:08:04.430 } 00:08:04.430 ] 00:08:04.430 18:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1951034 00:08:04.430 18:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:04.430 18:19:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:04.691 Running I/O for 10 seconds... 00:08:05.632 Latency(us) 00:08:05.632 [2024-12-06T17:20:00.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.632 Nvme0n1 : 1.00 25198.00 98.43 0.00 0.00 0.00 0.00 0.00 00:08:05.632 [2024-12-06T17:20:00.416Z] =================================================================================================================== 00:08:05.632 [2024-12-06T17:20:00.416Z] Total : 25198.00 98.43 0.00 0.00 0.00 0.00 0.00 00:08:05.632 00:08:06.575 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:06.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.575 Nvme0n1 : 2.00 25375.00 99.12 0.00 0.00 0.00 0.00 0.00 00:08:06.575 [2024-12-06T17:20:01.359Z] =================================================================================================================== 00:08:06.575 [2024-12-06T17:20:01.359Z] Total : 25375.00 99.12 0.00 0.00 0.00 0.00 0.00 00:08:06.575 00:08:06.575 true 00:08:06.575 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:06.575 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:06.834 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:06.834 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:06.834 18:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1951034 00:08:07.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.790 Nvme0n1 : 3.00 25461.00 99.46 0.00 0.00 0.00 0.00 0.00 00:08:07.790 [2024-12-06T17:20:02.574Z] =================================================================================================================== 00:08:07.790 [2024-12-06T17:20:02.574Z] Total : 25461.00 99.46 0.00 0.00 0.00 0.00 0.00 00:08:07.790 00:08:08.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.730 Nvme0n1 : 4.00 25511.00 99.65 0.00 0.00 0.00 0.00 0.00 00:08:08.730 [2024-12-06T17:20:03.514Z] 
=================================================================================================================== 00:08:08.730 [2024-12-06T17:20:03.514Z] Total : 25511.00 99.65 0.00 0.00 0.00 0.00 0.00 00:08:08.731 00:08:09.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.672 Nvme0n1 : 5.00 25554.40 99.82 0.00 0.00 0.00 0.00 0.00 00:08:09.672 [2024-12-06T17:20:04.456Z] =================================================================================================================== 00:08:09.672 [2024-12-06T17:20:04.456Z] Total : 25554.40 99.82 0.00 0.00 0.00 0.00 0.00 00:08:09.672 00:08:10.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.614 Nvme0n1 : 6.00 25583.33 99.93 0.00 0.00 0.00 0.00 0.00 00:08:10.614 [2024-12-06T17:20:05.398Z] =================================================================================================================== 00:08:10.614 [2024-12-06T17:20:05.398Z] Total : 25583.33 99.93 0.00 0.00 0.00 0.00 0.00 00:08:10.614 00:08:11.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.554 Nvme0n1 : 7.00 25604.00 100.02 0.00 0.00 0.00 0.00 0.00 00:08:11.554 [2024-12-06T17:20:06.338Z] =================================================================================================================== 00:08:11.554 [2024-12-06T17:20:06.338Z] Total : 25604.00 100.02 0.00 0.00 0.00 0.00 0.00 00:08:11.554 00:08:12.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.496 Nvme0n1 : 8.00 25621.25 100.08 0.00 0.00 0.00 0.00 0.00 00:08:12.496 [2024-12-06T17:20:07.280Z] =================================================================================================================== 00:08:12.496 [2024-12-06T17:20:07.280Z] Total : 25621.25 100.08 0.00 0.00 0.00 0.00 0.00 00:08:12.496 00:08:13.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.881 Nvme0n1 : 9.00 25636.67 100.14 0.00 0.00 0.00 0.00 0.00 00:08:13.881 [2024-12-06T17:20:08.665Z] =================================================================================================================== 00:08:13.881 [2024-12-06T17:20:08.665Z] Total : 25636.67 100.14 0.00 0.00 0.00 0.00 0.00 00:08:13.881 00:08:14.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.825 Nvme0n1 : 10.00 25645.70 100.18 0.00 0.00 0.00 0.00 0.00 00:08:14.825 [2024-12-06T17:20:09.609Z] =================================================================================================================== 00:08:14.825 [2024-12-06T17:20:09.609Z] Total : 25645.70 100.18 0.00 0.00 0.00 0.00 0.00 00:08:14.825 00:08:14.825 00:08:14.825 Latency(us) 00:08:14.825 [2024-12-06T17:20:09.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.825 Nvme0n1 : 10.00 25650.17 100.20 0.00 0.00 4986.94 2157.23 8628.91 00:08:14.825 [2024-12-06T17:20:09.609Z] =================================================================================================================== 00:08:14.825 [2024-12-06T17:20:09.609Z] Total : 25650.17 100.20 0.00 0.00 4986.94 2157.23 8628.91 00:08:14.825 { 00:08:14.825 "results": [ 00:08:14.825 { 00:08:14.825 "job": "Nvme0n1", 00:08:14.825 "core_mask": "0x2", 00:08:14.825 "workload": "randwrite", 00:08:14.825 "status": "finished", 00:08:14.825 "queue_depth": 128, 00:08:14.825 "io_size": 4096, 
00:08:14.825 "runtime": 10.003248, 00:08:14.825 "iops": 25650.168825165587, 00:08:14.825 "mibps": 100.19597197330307, 00:08:14.825 "io_failed": 0, 00:08:14.825 "io_timeout": 0, 00:08:14.825 "avg_latency_us": 4986.942430669499, 00:08:14.825 "min_latency_us": 2157.2266666666665, 00:08:14.825 "max_latency_us": 8628.906666666666 00:08:14.825 } 00:08:14.825 ], 00:08:14.825 "core_count": 1 00:08:14.825 } 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1950835 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1950835 ']' 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1950835 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1950835 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1950835' 00:08:14.825 killing process with pid 1950835 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1950835 00:08:14.825 Received shutdown signal, test time was about 10.000000 seconds 00:08:14.825 00:08:14.825 Latency(us) 00:08:14.825 [2024-12-06T17:20:09.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.825 [2024-12-06T17:20:09.609Z] =================================================================================================================== 00:08:14.825 [2024-12-06T17:20:09.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1950835 00:08:14.825 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:15.086 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:15.086 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:15.086 18:20:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:15.346 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:15.346 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:15.346 18:20:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:15.606 [2024-12-06 18:20:10.161245] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:15.606 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:15.607 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:15.607 request: 00:08:15.607 { 00:08:15.607 "uuid": "494838c9-817b-4dea-b5a9-cc65352b0f43", 00:08:15.607 "method": "bdev_lvol_get_lvstores", 00:08:15.607 "req_id": 1 00:08:15.607 } 00:08:15.607 Got JSON-RPC error response 00:08:15.607 response: 00:08:15.607 { 00:08:15.607 "code": -19, 00:08:15.607 "message": "No such device" 00:08:15.607 } 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:15.867 aio_bdev 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b5b00275-8fa8-4e48-a37b-ab968a7a6a82 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b5b00275-8fa8-4e48-a37b-ab968a7a6a82 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.867 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:16.128 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b5b00275-8fa8-4e48-a37b-ab968a7a6a82 -t 2000 00:08:16.128 [ 00:08:16.128 { 00:08:16.128 "name": "b5b00275-8fa8-4e48-a37b-ab968a7a6a82", 00:08:16.128 "aliases": [ 00:08:16.128 "lvs/lvol" 00:08:16.128 ], 00:08:16.128 "product_name": "Logical Volume", 00:08:16.128 "block_size": 4096, 00:08:16.128 "num_blocks": 38912, 00:08:16.128 "uuid": "b5b00275-8fa8-4e48-a37b-ab968a7a6a82", 00:08:16.128 "assigned_rate_limits": { 00:08:16.128 "rw_ios_per_sec": 0, 00:08:16.128 "rw_mbytes_per_sec": 0, 00:08:16.128 "r_mbytes_per_sec": 0, 00:08:16.128 "w_mbytes_per_sec": 0 00:08:16.128 }, 00:08:16.128 "claimed": false, 00:08:16.128 "zoned": false, 00:08:16.128 "supported_io_types": { 00:08:16.128 "read": true, 00:08:16.128 "write": true, 00:08:16.128 "unmap": true, 00:08:16.128 "flush": false, 00:08:16.128 "reset": true, 00:08:16.128 "nvme_admin": false, 00:08:16.128 "nvme_io": false, 00:08:16.128 "nvme_io_md": false, 00:08:16.128 "write_zeroes": true, 00:08:16.128 "zcopy": false, 00:08:16.128 "get_zone_info": false, 00:08:16.128 "zone_management": false, 00:08:16.128 "zone_append": false, 00:08:16.128 "compare": false, 00:08:16.128 "compare_and_write": false, 00:08:16.128 "abort": false, 00:08:16.128 "seek_hole": true, 00:08:16.128 "seek_data": true, 00:08:16.128 "copy": false, 00:08:16.128 "nvme_iov_md": false 00:08:16.128 }, 00:08:16.128 "driver_specific": { 00:08:16.128 "lvol": { 00:08:16.128 "lvol_store_uuid": "494838c9-817b-4dea-b5a9-cc65352b0f43", 00:08:16.128 "base_bdev": "aio_bdev", 00:08:16.128 "thin_provision": false, 00:08:16.128 "num_allocated_clusters": 38, 00:08:16.128 "snapshot": false, 00:08:16.128 "clone": false, 00:08:16.128 "esnap_clone": false 00:08:16.128 } 00:08:16.128 } 00:08:16.128 } 00:08:16.128 ] 00:08:16.128 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:16.128 18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:16.128 
18:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:16.389 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:16.389 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:16.389 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:16.650 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:16.650 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b5b00275-8fa8-4e48-a37b-ab968a7a6a82 00:08:16.650 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 494838c9-817b-4dea-b5a9-cc65352b0f43 00:08:16.910 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.170 00:08:17.170 real 0m16.026s 00:08:17.170 user 0m15.692s 00:08:17.170 sys 0m1.459s 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:17.170 ************************************ 00:08:17.170 END TEST lvs_grow_clean 00:08:17.170 ************************************ 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.170 ************************************ 00:08:17.170 START TEST lvs_grow_dirty 00:08:17.170 ************************************ 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.170 18:20:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.431 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:17.431 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:17.692 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=374b7d59-08ca-4608-8777-88cc2a083020 00:08:17.692 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:17.692 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:17.692 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:17.692 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:17.692 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 374b7d59-08ca-4608-8777-88cc2a083020 lvol 150 00:08:17.952 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b6f91a7c-3497-4262-8b89-ea75126a5355 00:08:17.952 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:17.952 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:18.213 [2024-12-06 18:20:12.746245] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:18.213 [2024-12-06 18:20:12.746287] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:18.213 true 00:08:18.213 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:18.213 18:20:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:18.213 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:18.213 18:20:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.472 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b6f91a7c-3497-4262-8b89-ea75126a5355 00:08:18.472 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:18.732 [2024-12-06 18:20:13.400125] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.732 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1953956 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1953956 /var/tmp/bdevperf.sock 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1953956 ']' 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:18.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.993 18:20:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:18.993 [2024-12-06 18:20:13.621547] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
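For reference, the lvs_grow flow being traced here reduces to a short RPC sequence. A minimal hand-run sketch, assuming a target already listening on the default /var/tmp/spdk.sock, with rpc.py standing in for the full scripts/rpc.py path and ./aio_bdev for the test's backing file (sizes mirror this run's 200M to 400M grow; the lvs and lvol names are the ones the script uses):

    # back an lvstore with a 200M AIO file: 4MiB clusters, 1 md page per 300 clusters
    truncate -s 200M ./aio_bdev
    rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -l lvs lvol 150          # 150M lvol on the 49-cluster store
    # grow the backing file, let the AIO bdev pick up the new size, then grow the lvstore
    truncate -s 400M ./aio_bdev
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_grow_lvstore -l lvs
    rpc.py bdev_lvol_get_lvstores -l lvs | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after

The exported side is the same handful of RPCs visible above: nvmf_create_subsystem, nvmf_subsystem_add_ns with the lvol UUID, and nvmf_subsystem_add_listener -t tcp, after which bdevperf attaches via bdev_nvme_attach_controller against its own /var/tmp/bdevperf.sock.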
00:08:18.993 [2024-12-06 18:20:13.621598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1953956 ] 00:08:18.993 [2024-12-06 18:20:13.703146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.993 [2024-12-06 18:20:13.733098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.935 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.935 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:19.935 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:19.935 Nvme0n1 00:08:19.935 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:20.196 [ 00:08:20.196 { 00:08:20.196 "name": "Nvme0n1", 00:08:20.196 "aliases": [ 00:08:20.196 "b6f91a7c-3497-4262-8b89-ea75126a5355" 00:08:20.196 ], 00:08:20.196 "product_name": "NVMe disk", 00:08:20.196 "block_size": 4096, 00:08:20.196 "num_blocks": 38912, 00:08:20.196 "uuid": "b6f91a7c-3497-4262-8b89-ea75126a5355", 00:08:20.196 "numa_id": 0, 00:08:20.196 "assigned_rate_limits": { 00:08:20.196 "rw_ios_per_sec": 0, 00:08:20.196 "rw_mbytes_per_sec": 0, 00:08:20.196 "r_mbytes_per_sec": 0, 00:08:20.196 "w_mbytes_per_sec": 0 00:08:20.196 }, 00:08:20.196 "claimed": false, 00:08:20.196 "zoned": false, 00:08:20.196 "supported_io_types": { 00:08:20.196 "read": true, 00:08:20.196 "write": true, 00:08:20.196 "unmap": true, 00:08:20.196 "flush": true, 00:08:20.196 "reset": true, 00:08:20.196 "nvme_admin": true, 00:08:20.196 "nvme_io": true, 00:08:20.196 "nvme_io_md": false, 00:08:20.196 "write_zeroes": true, 00:08:20.196 "zcopy": false, 00:08:20.196 "get_zone_info": false, 00:08:20.196 "zone_management": false, 00:08:20.196 "zone_append": false, 00:08:20.196 "compare": true, 00:08:20.196 "compare_and_write": true, 00:08:20.196 "abort": true, 00:08:20.196 "seek_hole": false, 00:08:20.196 "seek_data": false, 00:08:20.196 "copy": true, 00:08:20.196 "nvme_iov_md": false 00:08:20.196 }, 00:08:20.196 "memory_domains": [ 00:08:20.196 { 00:08:20.196 "dma_device_id": "system", 00:08:20.196 "dma_device_type": 1 00:08:20.196 } 00:08:20.196 ], 00:08:20.196 "driver_specific": { 00:08:20.196 "nvme": [ 00:08:20.196 { 00:08:20.196 "trid": { 00:08:20.196 "trtype": "TCP", 00:08:20.196 "adrfam": "IPv4", 00:08:20.196 "traddr": "10.0.0.2", 00:08:20.196 "trsvcid": "4420", 00:08:20.196 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:20.196 }, 00:08:20.196 "ctrlr_data": { 00:08:20.196 "cntlid": 1, 00:08:20.196 "vendor_id": "0x8086", 00:08:20.196 "model_number": "SPDK bdev Controller", 00:08:20.196 "serial_number": "SPDK0", 00:08:20.196 "firmware_revision": "25.01", 00:08:20.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.196 "oacs": { 00:08:20.196 "security": 0, 00:08:20.196 "format": 0, 00:08:20.196 "firmware": 0, 00:08:20.196 "ns_manage": 0 00:08:20.196 }, 00:08:20.196 "multi_ctrlr": true, 00:08:20.196 
"ana_reporting": false 00:08:20.196 }, 00:08:20.196 "vs": { 00:08:20.196 "nvme_version": "1.3" 00:08:20.196 }, 00:08:20.196 "ns_data": { 00:08:20.196 "id": 1, 00:08:20.196 "can_share": true 00:08:20.196 } 00:08:20.196 } 00:08:20.196 ], 00:08:20.196 "mp_policy": "active_passive" 00:08:20.196 } 00:08:20.196 } 00:08:20.196 ] 00:08:20.196 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1954292 00:08:20.196 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:20.196 18:20:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:20.196 Running I/O for 10 seconds... 00:08:21.147 Latency(us) 00:08:21.147 [2024-12-06T17:20:15.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.147 Nvme0n1 : 1.00 25304.00 98.84 0.00 0.00 0.00 0.00 0.00 00:08:21.147 [2024-12-06T17:20:15.931Z] =================================================================================================================== 00:08:21.147 [2024-12-06T17:20:15.931Z] Total : 25304.00 98.84 0.00 0.00 0.00 0.00 0.00 00:08:21.147 00:08:22.088 18:20:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:22.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.349 Nvme0n1 : 2.00 25420.00 99.30 0.00 0.00 0.00 0.00 0.00 00:08:22.349 [2024-12-06T17:20:17.133Z] =================================================================================================================== 00:08:22.349 [2024-12-06T17:20:17.133Z] Total : 25420.00 99.30 0.00 0.00 0.00 0.00 0.00 00:08:22.349 00:08:22.349 true 00:08:22.349 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:22.349 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:22.611 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:22.611 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:22.611 18:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1954292 00:08:23.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.183 Nvme0n1 : 3.00 25479.33 99.53 0.00 0.00 0.00 0.00 0.00 00:08:23.183 [2024-12-06T17:20:17.967Z] =================================================================================================================== 00:08:23.183 [2024-12-06T17:20:17.967Z] Total : 25479.33 99.53 0.00 0.00 0.00 0.00 0.00 00:08:23.183 00:08:24.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.568 Nvme0n1 : 4.00 25529.50 99.72 0.00 0.00 0.00 0.00 0.00 00:08:24.568 [2024-12-06T17:20:19.352Z] 
=================================================================================================================== 00:08:24.568 [2024-12-06T17:20:19.352Z] Total : 25529.50 99.72 0.00 0.00 0.00 0.00 0.00 00:08:24.568 00:08:25.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.139 Nvme0n1 : 5.00 25565.60 99.87 0.00 0.00 0.00 0.00 0.00 00:08:25.139 [2024-12-06T17:20:19.923Z] =================================================================================================================== 00:08:25.139 [2024-12-06T17:20:19.923Z] Total : 25565.60 99.87 0.00 0.00 0.00 0.00 0.00 00:08:25.139 00:08:26.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.527 Nvme0n1 : 6.00 25581.00 99.93 0.00 0.00 0.00 0.00 0.00 00:08:26.527 [2024-12-06T17:20:21.311Z] =================================================================================================================== 00:08:26.527 [2024-12-06T17:20:21.311Z] Total : 25581.00 99.93 0.00 0.00 0.00 0.00 0.00 00:08:26.527 00:08:27.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.470 Nvme0n1 : 7.00 25601.86 100.01 0.00 0.00 0.00 0.00 0.00 00:08:27.470 [2024-12-06T17:20:22.254Z] =================================================================================================================== 00:08:27.470 [2024-12-06T17:20:22.254Z] Total : 25601.86 100.01 0.00 0.00 0.00 0.00 0.00 00:08:27.470 00:08:28.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.412 Nvme0n1 : 8.00 25617.25 100.07 0.00 0.00 0.00 0.00 0.00 00:08:28.412 [2024-12-06T17:20:23.196Z] =================================================================================================================== 00:08:28.412 [2024-12-06T17:20:23.196Z] Total : 25617.25 100.07 0.00 0.00 0.00 0.00 0.00 00:08:28.412 00:08:29.357 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.357 Nvme0n1 : 9.00 25631.44 100.12 0.00 0.00 0.00 0.00 0.00 00:08:29.357 [2024-12-06T17:20:24.141Z] =================================================================================================================== 00:08:29.357 [2024-12-06T17:20:24.141Z] Total : 25631.44 100.12 0.00 0.00 0.00 0.00 0.00 00:08:29.357 00:08:30.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.301 Nvme0n1 : 10.00 25645.70 100.18 0.00 0.00 0.00 0.00 0.00 00:08:30.301 [2024-12-06T17:20:25.085Z] =================================================================================================================== 00:08:30.301 [2024-12-06T17:20:25.085Z] Total : 25645.70 100.18 0.00 0.00 0.00 0.00 0.00 00:08:30.301 00:08:30.301 00:08:30.301 Latency(us) 00:08:30.301 [2024-12-06T17:20:25.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.301 Nvme0n1 : 10.00 25646.82 100.18 0.00 0.00 4987.70 1870.51 8628.91 00:08:30.301 [2024-12-06T17:20:25.085Z] =================================================================================================================== 00:08:30.301 [2024-12-06T17:20:25.085Z] Total : 25646.82 100.18 0.00 0.00 4987.70 1870.51 8628.91 00:08:30.301 { 00:08:30.301 "results": [ 00:08:30.301 { 00:08:30.301 "job": "Nvme0n1", 00:08:30.301 "core_mask": "0x2", 00:08:30.301 "workload": "randwrite", 00:08:30.301 "status": "finished", 00:08:30.301 "queue_depth": 128, 00:08:30.301 "io_size": 4096, 
00:08:30.301 "runtime": 10.004555, 00:08:30.301 "iops": 25646.817874458186, 00:08:30.301 "mibps": 100.18288232210229, 00:08:30.301 "io_failed": 0, 00:08:30.302 "io_timeout": 0, 00:08:30.302 "avg_latency_us": 4987.701203421868, 00:08:30.302 "min_latency_us": 1870.5066666666667, 00:08:30.302 "max_latency_us": 8628.906666666666 00:08:30.302 } 00:08:30.302 ], 00:08:30.302 "core_count": 1 00:08:30.302 } 00:08:30.302 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1953956 00:08:30.302 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1953956 ']' 00:08:30.302 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1953956 00:08:30.302 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:30.302 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.302 18:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1953956 00:08:30.302 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:30.302 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:30.302 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1953956' 00:08:30.302 killing process with pid 1953956 00:08:30.302 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1953956 00:08:30.302 Received shutdown signal, test time was about 10.000000 seconds 00:08:30.302 00:08:30.302 Latency(us) 00:08:30.302 [2024-12-06T17:20:25.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.302 [2024-12-06T17:20:25.086Z] =================================================================================================================== 00:08:30.302 [2024-12-06T17:20:25.086Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:30.302 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1953956 00:08:30.563 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:30.563 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:30.823 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:30.823 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:31.084 18:20:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1950138 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1950138 00:08:31.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1950138 Killed "${NVMF_APP[@]}" "$@" 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1956326 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1956326 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1956326 ']' 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.084 18:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.084 [2024-12-06 18:20:25.704854] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:08:31.084 [2024-12-06 18:20:25.704907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.084 [2024-12-06 18:20:25.795311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.084 [2024-12-06 18:20:25.823962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.084 [2024-12-06 18:20:25.823991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.084 [2024-12-06 18:20:25.823997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.084 [2024-12-06 18:20:25.824002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
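What makes this the dirty variant: the previous target was killed with SIGKILL while the lvstore was still open, so the superblob was never written out clean, and the fresh target started here must recover the blobstore when the same AIO file is re-attached. A sketch of that step, assuming $nvmfpid holds the old target's pid and nvmf_tgt abbreviates the namespaced build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 invocation this run uses:

    kill -9 "$nvmfpid"                                  # dirty shutdown, lvstore left open
    nvmf_tgt -m 0x1 &                                   # fresh target instance
    rpc.py bdev_aio_create ./aio_bdev aio_bdev 4096     # reload runs bs_recover on the lvstore
    rpc.py bdev_lvol_get_lvstores -l lvs | jq -r '.[0].free_clusters'

The bs_recover notices that follow ("Performing recovery on blobstore", "Recover: blob 0x0/0x1") are that replay, and the subsequent free/total cluster checks confirm the grown geometry survived the crash.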
00:08:31.084 [2024-12-06 18:20:25.824006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.084 [2024-12-06 18:20:25.824449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.024 [2024-12-06 18:20:26.710839] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:32.024 [2024-12-06 18:20:26.710912] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:32.024 [2024-12-06 18:20:26.710934] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b6f91a7c-3497-4262-8b89-ea75126a5355 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b6f91a7c-3497-4262-8b89-ea75126a5355 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.024 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:32.285 18:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b6f91a7c-3497-4262-8b89-ea75126a5355 -t 2000 00:08:32.285 [ 00:08:32.285 { 00:08:32.285 "name": "b6f91a7c-3497-4262-8b89-ea75126a5355", 00:08:32.285 "aliases": [ 00:08:32.285 "lvs/lvol" 00:08:32.285 ], 00:08:32.285 "product_name": "Logical Volume", 00:08:32.285 "block_size": 4096, 00:08:32.285 "num_blocks": 38912, 00:08:32.285 "uuid": "b6f91a7c-3497-4262-8b89-ea75126a5355", 00:08:32.285 "assigned_rate_limits": { 00:08:32.285 "rw_ios_per_sec": 0, 00:08:32.285 "rw_mbytes_per_sec": 0, 
00:08:32.285 "r_mbytes_per_sec": 0, 00:08:32.285 "w_mbytes_per_sec": 0 00:08:32.285 }, 00:08:32.285 "claimed": false, 00:08:32.285 "zoned": false, 00:08:32.285 "supported_io_types": { 00:08:32.285 "read": true, 00:08:32.285 "write": true, 00:08:32.285 "unmap": true, 00:08:32.285 "flush": false, 00:08:32.285 "reset": true, 00:08:32.285 "nvme_admin": false, 00:08:32.285 "nvme_io": false, 00:08:32.285 "nvme_io_md": false, 00:08:32.285 "write_zeroes": true, 00:08:32.285 "zcopy": false, 00:08:32.285 "get_zone_info": false, 00:08:32.285 "zone_management": false, 00:08:32.285 "zone_append": false, 00:08:32.285 "compare": false, 00:08:32.285 "compare_and_write": false, 00:08:32.285 "abort": false, 00:08:32.285 "seek_hole": true, 00:08:32.285 "seek_data": true, 00:08:32.285 "copy": false, 00:08:32.285 "nvme_iov_md": false 00:08:32.285 }, 00:08:32.285 "driver_specific": { 00:08:32.285 "lvol": { 00:08:32.285 "lvol_store_uuid": "374b7d59-08ca-4608-8777-88cc2a083020", 00:08:32.285 "base_bdev": "aio_bdev", 00:08:32.285 "thin_provision": false, 00:08:32.285 "num_allocated_clusters": 38, 00:08:32.285 "snapshot": false, 00:08:32.285 "clone": false, 00:08:32.285 "esnap_clone": false 00:08:32.285 } 00:08:32.285 } 00:08:32.285 } 00:08:32.285 ] 00:08:32.546 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:32.546 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:32.546 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:32.546 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:32.546 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:32.546 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:32.806 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:32.806 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.806 [2024-12-06 18:20:27.567475] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:33.067 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:33.068 request: 00:08:33.068 { 00:08:33.068 "uuid": "374b7d59-08ca-4608-8777-88cc2a083020", 00:08:33.068 "method": "bdev_lvol_get_lvstores", 00:08:33.068 "req_id": 1 00:08:33.068 } 00:08:33.068 Got JSON-RPC error response 00:08:33.068 response: 00:08:33.068 { 00:08:33.068 "code": -19, 00:08:33.068 "message": "No such device" 00:08:33.068 } 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.068 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.329 aio_bdev 00:08:33.329 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b6f91a7c-3497-4262-8b89-ea75126a5355 00:08:33.329 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b6f91a7c-3497-4262-8b89-ea75126a5355 00:08:33.329 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.329 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:33.329 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.329 18:20:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.330 18:20:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:33.330 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b6f91a7c-3497-4262-8b89-ea75126a5355 -t 2000 00:08:33.591 [ 00:08:33.591 { 00:08:33.591 "name": "b6f91a7c-3497-4262-8b89-ea75126a5355", 00:08:33.591 "aliases": [ 00:08:33.591 "lvs/lvol" 00:08:33.591 ], 00:08:33.591 "product_name": "Logical Volume", 00:08:33.591 "block_size": 4096, 00:08:33.591 "num_blocks": 38912, 00:08:33.591 "uuid": "b6f91a7c-3497-4262-8b89-ea75126a5355", 00:08:33.591 "assigned_rate_limits": { 00:08:33.591 "rw_ios_per_sec": 0, 00:08:33.591 "rw_mbytes_per_sec": 0, 00:08:33.591 "r_mbytes_per_sec": 0, 00:08:33.591 "w_mbytes_per_sec": 0 00:08:33.591 }, 00:08:33.591 "claimed": false, 00:08:33.591 "zoned": false, 00:08:33.591 "supported_io_types": { 00:08:33.591 "read": true, 00:08:33.591 "write": true, 00:08:33.591 "unmap": true, 00:08:33.591 "flush": false, 00:08:33.591 "reset": true, 00:08:33.591 "nvme_admin": false, 00:08:33.591 "nvme_io": false, 00:08:33.591 "nvme_io_md": false, 00:08:33.591 "write_zeroes": true, 00:08:33.591 "zcopy": false, 00:08:33.591 "get_zone_info": false, 00:08:33.591 "zone_management": false, 00:08:33.591 "zone_append": false, 00:08:33.591 "compare": false, 00:08:33.591 "compare_and_write": false, 00:08:33.591 "abort": false, 00:08:33.591 "seek_hole": true, 00:08:33.591 "seek_data": true, 00:08:33.591 "copy": false, 00:08:33.591 "nvme_iov_md": false 00:08:33.591 }, 00:08:33.591 "driver_specific": { 00:08:33.591 "lvol": { 00:08:33.591 "lvol_store_uuid": "374b7d59-08ca-4608-8777-88cc2a083020", 00:08:33.591 "base_bdev": "aio_bdev", 00:08:33.591 "thin_provision": false, 00:08:33.591 "num_allocated_clusters": 38, 00:08:33.591 "snapshot": false, 00:08:33.591 "clone": false, 00:08:33.591 "esnap_clone": false 00:08:33.591 } 00:08:33.591 } 00:08:33.591 } 00:08:33.591 ] 00:08:33.591 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:33.591 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:33.591 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:33.852 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:33.852 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:33.852 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:33.852 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:33.852 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b6f91a7c-3497-4262-8b89-ea75126a5355 00:08:34.112 18:20:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 374b7d59-08ca-4608-8777-88cc2a083020 00:08:34.373 18:20:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:34.374 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:34.374 00:08:34.374 real 0m17.252s 00:08:34.374 user 0m45.701s 00:08:34.374 sys 0m2.915s 00:08:34.374 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.374 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.374 ************************************ 00:08:34.374 END TEST lvs_grow_dirty 00:08:34.374 ************************************ 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:34.634 nvmf_trace.0 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.634 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.634 rmmod nvme_tcp 00:08:34.634 rmmod nvme_fabrics 00:08:34.634 rmmod nvme_keyring 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:34.635 
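The teardown traced above mirrors setup in reverse, with the trace shm archived before the NVMe modules are unloaded. A sketch under the same naming assumptions (the lvol can be addressed by its lvs/lvol alias instead of the UUID the script resolves):

    rpc.py bdev_lvol_delete lvs/lvol
    rpc.py bdev_lvol_delete_lvstore -l lvs
    rpc.py bdev_aio_delete aio_bdev
    rm -f ./aio_bdev
    tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0   # capture for offline analysis
    modprobe -r nvme-tcp nvme-fabrics                           # as nvmftestfini does above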
18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1956326 ']' 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1956326 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1956326 ']' 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1956326 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1956326 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1956326' 00:08:34.635 killing process with pid 1956326 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1956326 00:08:34.635 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1956326 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.896 18:20:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.808 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.808 00:08:36.808 real 0m44.677s 00:08:36.808 user 1m7.720s 00:08:36.808 sys 0m10.541s 00:08:36.808 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.808 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:36.808 ************************************ 00:08:36.808 END TEST nvmf_lvs_grow 00:08:36.808 ************************************ 00:08:36.808 18:20:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:36.808 18:20:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.808 18:20:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.808 18:20:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.069 ************************************ 00:08:37.069 START TEST nvmf_bdev_io_wait 00:08:37.069 ************************************ 00:08:37.069 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:37.069 * Looking for test storage... 00:08:37.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.069 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.069 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.070 --rc genhtml_branch_coverage=1 00:08:37.070 --rc genhtml_function_coverage=1 00:08:37.070 --rc genhtml_legend=1 00:08:37.070 --rc geninfo_all_blocks=1 00:08:37.070 --rc geninfo_unexecuted_blocks=1 00:08:37.070 00:08:37.070 ' 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.070 --rc genhtml_branch_coverage=1 00:08:37.070 --rc genhtml_function_coverage=1 00:08:37.070 --rc genhtml_legend=1 00:08:37.070 --rc geninfo_all_blocks=1 00:08:37.070 --rc geninfo_unexecuted_blocks=1 00:08:37.070 00:08:37.070 ' 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.070 --rc genhtml_branch_coverage=1 00:08:37.070 --rc genhtml_function_coverage=1 00:08:37.070 --rc genhtml_legend=1 00:08:37.070 --rc geninfo_all_blocks=1 00:08:37.070 --rc geninfo_unexecuted_blocks=1 00:08:37.070 00:08:37.070 ' 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.070 --rc genhtml_branch_coverage=1 00:08:37.070 --rc genhtml_function_coverage=1 00:08:37.070 --rc genhtml_legend=1 00:08:37.070 --rc geninfo_all_blocks=1 00:08:37.070 --rc geninfo_unexecuted_blocks=1 00:08:37.070 00:08:37.070 ' 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.070 18:20:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.070 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.331 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:37.331 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:37.331 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.331 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.331 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.331 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.332 18:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:45.476 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:45.476 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.476 18:20:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:45.476 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.476 18:20:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:45.476 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.476 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:08:45.477 00:08:45.477 --- 10.0.0.2 ping statistics --- 00:08:45.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.477 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:08:45.477 00:08:45.477 --- 10.0.0.1 ping statistics --- 00:08:45.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.477 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1961395 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1961395 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1961395 ']' 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.477 18:20:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.477 [2024-12-06 18:20:39.397373] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
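#
# [Editor's note] A minimal sketch of the network bring-up that nvmf_tcp_init performs
# in the trace above. The interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing,
# the namespace name and the iptables rule are all copied from the trace; only the
# condensation into a plain command list is the editor's, not the verbatim common.sh code.
#
ip netns add cvl_0_0_ns_spdk                         # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # root ns -> namespace, as pinged above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse direction
#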
00:08:45.477 [2024-12-06 18:20:39.397432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.477 [2024-12-06 18:20:39.498348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.477 [2024-12-06 18:20:39.553120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.477 [2024-12-06 18:20:39.553177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.477 [2024-12-06 18:20:39.553186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.477 [2024-12-06 18:20:39.553194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.477 [2024-12-06 18:20:39.553201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.477 [2024-12-06 18:20:39.555233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.477 [2024-12-06 18:20:39.555396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.477 [2024-12-06 18:20:39.555559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.477 [2024-12-06 18:20:39.555559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.477 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.477 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:45.477 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.477 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.477 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:45.740 [2024-12-06 18:20:40.351085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 Malloc0 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.740 [2024-12-06 18:20:40.416923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1961628 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1961631 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.740 { 00:08:45.740 "params": { 
00:08:45.740 "name": "Nvme$subsystem", 00:08:45.740 "trtype": "$TEST_TRANSPORT", 00:08:45.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.740 "adrfam": "ipv4", 00:08:45.740 "trsvcid": "$NVMF_PORT", 00:08:45.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.740 "hdgst": ${hdgst:-false}, 00:08:45.740 "ddgst": ${ddgst:-false} 00:08:45.740 }, 00:08:45.740 "method": "bdev_nvme_attach_controller" 00:08:45.740 } 00:08:45.740 EOF 00:08:45.740 )") 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1961634 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.740 { 00:08:45.740 "params": { 00:08:45.740 "name": "Nvme$subsystem", 00:08:45.740 "trtype": "$TEST_TRANSPORT", 00:08:45.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.740 "adrfam": "ipv4", 00:08:45.740 "trsvcid": "$NVMF_PORT", 00:08:45.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.740 "hdgst": ${hdgst:-false}, 00:08:45.740 "ddgst": ${ddgst:-false} 00:08:45.740 }, 00:08:45.740 "method": "bdev_nvme_attach_controller" 00:08:45.740 } 00:08:45.740 EOF 00:08:45.740 )") 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1961638 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.740 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.741 { 00:08:45.741 "params": { 00:08:45.741 "name": "Nvme$subsystem", 00:08:45.741 "trtype": "$TEST_TRANSPORT", 00:08:45.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.741 "adrfam": "ipv4", 00:08:45.741 "trsvcid": "$NVMF_PORT", 00:08:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.741 "hdgst": ${hdgst:-false}, 
00:08:45.741 "ddgst": ${ddgst:-false} 00:08:45.741 }, 00:08:45.741 "method": "bdev_nvme_attach_controller" 00:08:45.741 } 00:08:45.741 EOF 00:08:45.741 )") 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:45.741 { 00:08:45.741 "params": { 00:08:45.741 "name": "Nvme$subsystem", 00:08:45.741 "trtype": "$TEST_TRANSPORT", 00:08:45.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.741 "adrfam": "ipv4", 00:08:45.741 "trsvcid": "$NVMF_PORT", 00:08:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.741 "hdgst": ${hdgst:-false}, 00:08:45.741 "ddgst": ${ddgst:-false} 00:08:45.741 }, 00:08:45.741 "method": "bdev_nvme_attach_controller" 00:08:45.741 } 00:08:45.741 EOF 00:08:45.741 )") 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1961628 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.741 "params": { 00:08:45.741 "name": "Nvme1", 00:08:45.741 "trtype": "tcp", 00:08:45.741 "traddr": "10.0.0.2", 00:08:45.741 "adrfam": "ipv4", 00:08:45.741 "trsvcid": "4420", 00:08:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.741 "hdgst": false, 00:08:45.741 "ddgst": false 00:08:45.741 }, 00:08:45.741 "method": "bdev_nvme_attach_controller" 00:08:45.741 }' 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.741 "params": { 00:08:45.741 "name": "Nvme1", 00:08:45.741 "trtype": "tcp", 00:08:45.741 "traddr": "10.0.0.2", 00:08:45.741 "adrfam": "ipv4", 00:08:45.741 "trsvcid": "4420", 00:08:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.741 "hdgst": false, 00:08:45.741 "ddgst": false 00:08:45.741 }, 00:08:45.741 "method": "bdev_nvme_attach_controller" 00:08:45.741 }' 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.741 "params": { 00:08:45.741 "name": "Nvme1", 00:08:45.741 "trtype": "tcp", 00:08:45.741 "traddr": "10.0.0.2", 00:08:45.741 "adrfam": "ipv4", 00:08:45.741 "trsvcid": "4420", 00:08:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.741 "hdgst": false, 00:08:45.741 "ddgst": false 00:08:45.741 }, 00:08:45.741 "method": "bdev_nvme_attach_controller" 00:08:45.741 }' 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:45.741 18:20:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:45.741 "params": { 00:08:45.741 "name": "Nvme1", 00:08:45.741 "trtype": "tcp", 00:08:45.741 "traddr": "10.0.0.2", 00:08:45.741 "adrfam": "ipv4", 00:08:45.741 "trsvcid": "4420", 00:08:45.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.741 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.741 "hdgst": false, 00:08:45.741 "ddgst": false 00:08:45.741 }, 00:08:45.741 "method": "bdev_nvme_attach_controller" 00:08:45.741 }' 00:08:45.741 [2024-12-06 18:20:40.476473] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:08:45.741 [2024-12-06 18:20:40.476543] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:45.741 [2024-12-06 18:20:40.480471] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:08:45.741 [2024-12-06 18:20:40.480549] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:45.741 [2024-12-06 18:20:40.481556] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:08:45.741 [2024-12-06 18:20:40.481633] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:45.741 [2024-12-06 18:20:40.482446] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
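#
# [Editor's note] A minimal sketch of how one of the four bdevperf instances above is
# launched. The command-line flags and the JSON fragment are copied from the trace; the
# outer "subsystems"/"bdev" wrapper is an assumption about the shape gen_nvmf_target_json
# emits, and WRITE_PID mirrors the variable name used by bdev_io_wait.sh.
#
cfg='{"subsystems":[{"subsystem":"bdev","config":[{
  "method": "bdev_nvme_attach_controller",
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false }}]}]}'
./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
    --json <(printf '%s\n' "$cfg") &   # bash expands <(...) to the /dev/fd/63 seen above
WRITE_PID=$!
# The read/flush/unmap instances differ only in -m/-i/-w (0x20/2/read, 0x40/3/flush,
# 0x80/4/unmap); the script then waits on each PID to collect the per-workload tables.
#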
00:08:45.741 [2024-12-06 18:20:40.482506] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:46.003 [2024-12-06 18:20:40.660484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.003 [2024-12-06 18:20:40.698927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:46.003 [2024-12-06 18:20:40.726114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.003 [2024-12-06 18:20:40.766326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:46.264 [2024-12-06 18:20:40.792643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.264 [2024-12-06 18:20:40.831824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:46.264 [2024-12-06 18:20:40.884489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.264 [2024-12-06 18:20:40.925888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:46.264 Running I/O for 1 seconds... 00:08:46.526 Running I/O for 1 seconds... 00:08:46.526 Running I/O for 1 seconds... 00:08:46.526 Running I/O for 1 seconds... 00:08:47.472 10960.00 IOPS, 42.81 MiB/s 00:08:47.472 Latency(us) 00:08:47.472 [2024-12-06T17:20:42.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.472 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:47.472 Nvme1n1 : 1.01 11012.43 43.02 0.00 0.00 11576.78 6417.07 18350.08 00:08:47.472 [2024-12-06T17:20:42.256Z] =================================================================================================================== 00:08:47.472 [2024-12-06T17:20:42.256Z] Total : 11012.43 43.02 0.00 0.00 11576.78 6417.07 18350.08 00:08:47.472 180224.00 IOPS, 704.00 MiB/s 00:08:47.472 Latency(us) 00:08:47.472 [2024-12-06T17:20:42.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.472 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:47.472 Nvme1n1 : 1.00 179865.61 702.60 0.00 0.00 707.49 298.67 1966.08 00:08:47.472 [2024-12-06T17:20:42.256Z] =================================================================================================================== 00:08:47.472 [2024-12-06T17:20:42.256Z] Total : 179865.61 702.60 0.00 0.00 707.49 298.67 1966.08 00:08:47.472 9328.00 IOPS, 36.44 MiB/s [2024-12-06T17:20:42.256Z] 10012.00 IOPS, 39.11 MiB/s 00:08:47.472 Latency(us) 00:08:47.472 [2024-12-06T17:20:42.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.472 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:47.472 Nvme1n1 : 1.01 9380.58 36.64 0.00 0.00 13589.34 6526.29 25231.36 00:08:47.472 [2024-12-06T17:20:42.256Z] =================================================================================================================== 00:08:47.472 [2024-12-06T17:20:42.256Z] Total : 9380.58 36.64 0.00 0.00 13589.34 6526.29 25231.36 00:08:47.472 00:08:47.472 Latency(us) 00:08:47.472 [2024-12-06T17:20:42.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.472 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:47.472 Nvme1n1 : 1.01 10082.90 39.39 0.00 0.00 12649.41 5925.55 23592.96 00:08:47.472 [2024-12-06T17:20:42.256Z] 
=================================================================================================================== 00:08:47.472 [2024-12-06T17:20:42.256Z] Total : 10082.90 39.39 0.00 0.00 12649.41 5925.55 23592.96 00:08:47.472 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1961631 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1961634 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1961638 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.734 rmmod nvme_tcp 00:08:47.734 rmmod nvme_fabrics 00:08:47.734 rmmod nvme_keyring 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1961395 ']' 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1961395 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1961395 ']' 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1961395 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1961395 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1961395' 00:08:47.734 killing process with pid 1961395 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1961395 00:08:47.734 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1961395 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.995 18:20:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.539 00:08:50.539 real 0m13.084s 00:08:50.539 user 0m19.525s 00:08:50.539 sys 0m7.575s 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:50.539 ************************************ 00:08:50.539 END TEST nvmf_bdev_io_wait 00:08:50.539 ************************************ 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.539 ************************************ 00:08:50.539 START TEST nvmf_queue_depth 00:08:50.539 ************************************ 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:50.539 * Looking for test storage... 
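#
# [Editor's note] A minimal sketch, hypothetical and condensed, of the run_test helper
# traced above at autotest_common.sh@1105/@1129: it is what wraps every suite in this
# log with START/END banners and the real/user/sys timing that closed nvmf_bdev_io_wait.
#
run_test() {
  [ $# -le 1 ] && return 1             # arg-count guard, seen as '[' 3 -le 1 ']' above
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"                            # e.g. test/nvmf/target/queue_depth.sh --transport=tcp
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
}
#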
00:08:50.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:50.539 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.540 --rc genhtml_branch_coverage=1 00:08:50.540 --rc genhtml_function_coverage=1 00:08:50.540 --rc genhtml_legend=1 00:08:50.540 --rc geninfo_all_blocks=1 00:08:50.540 --rc geninfo_unexecuted_blocks=1 00:08:50.540 00:08:50.540 ' 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.540 --rc genhtml_branch_coverage=1 00:08:50.540 --rc genhtml_function_coverage=1 00:08:50.540 --rc genhtml_legend=1 00:08:50.540 --rc geninfo_all_blocks=1 00:08:50.540 --rc geninfo_unexecuted_blocks=1 00:08:50.540 00:08:50.540 ' 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.540 --rc genhtml_branch_coverage=1 00:08:50.540 --rc genhtml_function_coverage=1 00:08:50.540 --rc genhtml_legend=1 00:08:50.540 --rc geninfo_all_blocks=1 00:08:50.540 --rc geninfo_unexecuted_blocks=1 00:08:50.540 00:08:50.540 ' 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.540 --rc genhtml_branch_coverage=1 00:08:50.540 --rc genhtml_function_coverage=1 00:08:50.540 --rc genhtml_legend=1 00:08:50.540 --rc geninfo_all_blocks=1 00:08:50.540 --rc geninfo_unexecuted_blocks=1 00:08:50.540 00:08:50.540 ' 00:08:50.540 18:20:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.540 18:20:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:58.679 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:58.679 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:58.679 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:58.679 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.679 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:58.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:08:58.680 00:08:58.680 --- 10.0.0.2 ping statistics --- 00:08:58.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.680 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:58.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:08:58.680 00:08:58.680 --- 10.0.0.1 ping statistics --- 00:08:58.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.680 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1966269 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1966269 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1966269 ']' 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.680 18:20:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.680 [2024-12-06 18:20:52.663583] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
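The records above show nvmf_tcp_init moving one port of the E810 pair into a private network namespace so that target and initiator traffic cross a real link rather than loopback. A minimal standalone sketch of that plumbing, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run (the actual helper in nvmf/common.sh also flushes addresses and tags the iptables rule with a longer cleanup comment):

  # Target side lives in its own namespace; the initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator-facing interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # Verify both directions before starting the target, as the pings above do.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1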
00:08:58.680 [2024-12-06 18:20:52.663663] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.680 [2024-12-06 18:20:52.767827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.680 [2024-12-06 18:20:52.818548] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.680 [2024-12-06 18:20:52.818602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.680 [2024-12-06 18:20:52.818611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.680 [2024-12-06 18:20:52.818618] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.680 [2024-12-06 18:20:52.818625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.680 [2024-12-06 18:20:52.819382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 [2024-12-06 18:20:53.535301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 Malloc0 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.941 18:20:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 [2024-12-06 18:20:53.599145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1966487 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1966487 /var/tmp/bdevperf.sock 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1966487 ']' 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:58.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.941 18:20:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.941 [2024-12-06 18:20:53.656779] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:08:58.941 [2024-12-06 18:20:53.656841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966487 ] 00:08:59.209 [2024-12-06 18:20:53.751102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.210 [2024-12-06 18:20:53.803833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.983 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.983 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:59.983 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:59.983 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.983 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.983 NVMe0n1 00:08:59.983 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.983 18:20:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:00.245 Running I/O for 10 seconds... 00:09:02.128 9256.00 IOPS, 36.16 MiB/s [2024-12-06T17:20:57.853Z] 10544.00 IOPS, 41.19 MiB/s [2024-12-06T17:20:59.238Z] 10924.67 IOPS, 42.67 MiB/s [2024-12-06T17:21:00.179Z] 11123.75 IOPS, 43.45 MiB/s [2024-12-06T17:21:01.121Z] 11486.20 IOPS, 44.87 MiB/s [2024-12-06T17:21:02.063Z] 11800.50 IOPS, 46.10 MiB/s [2024-12-06T17:21:03.021Z] 12027.14 IOPS, 46.98 MiB/s [2024-12-06T17:21:03.962Z] 12257.38 IOPS, 47.88 MiB/s [2024-12-06T17:21:04.905Z] 12397.89 IOPS, 48.43 MiB/s [2024-12-06T17:21:04.905Z] 12494.50 IOPS, 48.81 MiB/s 00:09:10.121 Latency(us) 00:09:10.121 [2024-12-06T17:21:04.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.121 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:10.121 Verification LBA range: start 0x0 length 0x4000 00:09:10.121 NVMe0n1 : 10.05 12535.95 48.97 0.00 0.00 81421.88 15837.87 71652.69 00:09:10.121 [2024-12-06T17:21:04.905Z] =================================================================================================================== 00:09:10.121 [2024-12-06T17:21:04.905Z] Total : 12535.95 48.97 0.00 0.00 81421.88 15837.87 71652.69 00:09:10.121 { 00:09:10.121 "results": [ 00:09:10.121 { 00:09:10.121 "job": "NVMe0n1", 00:09:10.121 "core_mask": "0x1", 00:09:10.121 "workload": "verify", 00:09:10.121 "status": "finished", 00:09:10.121 "verify_range": { 00:09:10.121 "start": 0, 00:09:10.121 "length": 16384 00:09:10.121 }, 00:09:10.121 "queue_depth": 1024, 00:09:10.121 "io_size": 4096, 00:09:10.121 "runtime": 10.048537, 00:09:10.121 "iops": 12535.954238910599, 00:09:10.121 "mibps": 48.96857124574453, 00:09:10.121 "io_failed": 0, 00:09:10.121 "io_timeout": 0, 00:09:10.121 "avg_latency_us": 81421.88398154029, 00:09:10.121 "min_latency_us": 15837.866666666667, 00:09:10.121 "max_latency_us": 71652.69333333333 00:09:10.121 } 00:09:10.121 ], 00:09:10.121 "core_count": 1 00:09:10.121 } 00:09:10.121 18:21:04 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1966487 00:09:10.121 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1966487 ']' 00:09:10.121 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1966487 00:09:10.121 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:10.382 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.382 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1966487 00:09:10.382 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.382 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.382 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1966487' 00:09:10.382 killing process with pid 1966487 00:09:10.382 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1966487 00:09:10.382 Received shutdown signal, test time was about 10.000000 seconds 00:09:10.382 00:09:10.382 Latency(us) 00:09:10.382 [2024-12-06T17:21:05.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.382 [2024-12-06T17:21:05.166Z] =================================================================================================================== 00:09:10.382 [2024-12-06T17:21:05.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:10.382 18:21:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1966487 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.382 rmmod nvme_tcp 00:09:10.382 rmmod nvme_fabrics 00:09:10.382 rmmod nvme_keyring 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1966269 ']' 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1966269 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1966269 ']' 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1966269 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.382 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1966269 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1966269' 00:09:10.652 killing process with pid 1966269 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1966269 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1966269 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.652 18:21:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.197 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:13.197 00:09:13.197 real 0m22.610s 00:09:13.197 user 0m25.780s 00:09:13.198 sys 0m7.243s 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:13.198 ************************************ 00:09:13.198 END TEST nvmf_queue_depth 00:09:13.198 ************************************ 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.198 ************************************ 00:09:13.198 START TEST nvmf_target_multipath 00:09:13.198 ************************************ 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:13.198 * Looking for test storage... 00:09:13.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:13.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.198 --rc genhtml_branch_coverage=1 00:09:13.198 --rc genhtml_function_coverage=1 00:09:13.198 --rc genhtml_legend=1 00:09:13.198 --rc geninfo_all_blocks=1 00:09:13.198 --rc geninfo_unexecuted_blocks=1 00:09:13.198 00:09:13.198 ' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:13.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.198 --rc genhtml_branch_coverage=1 00:09:13.198 --rc genhtml_function_coverage=1 00:09:13.198 --rc genhtml_legend=1 00:09:13.198 --rc geninfo_all_blocks=1 00:09:13.198 --rc geninfo_unexecuted_blocks=1 00:09:13.198 00:09:13.198 ' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:13.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.198 --rc genhtml_branch_coverage=1 00:09:13.198 --rc genhtml_function_coverage=1 00:09:13.198 --rc genhtml_legend=1 00:09:13.198 --rc geninfo_all_blocks=1 00:09:13.198 --rc geninfo_unexecuted_blocks=1 00:09:13.198 00:09:13.198 ' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:13.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.198 --rc genhtml_branch_coverage=1 00:09:13.198 --rc genhtml_function_coverage=1 00:09:13.198 --rc genhtml_legend=1 00:09:13.198 --rc geninfo_all_blocks=1 00:09:13.198 --rc geninfo_unexecuted_blocks=1 00:09:13.198 00:09:13.198 ' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.198 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.199 18:21:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:21.341 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.341 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:21.342 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:21.342 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:21.342 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.342 18:21:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:21.342 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.342 18:21:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.342 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.342 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.342 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:21.342 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.342 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.342 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.342 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.342 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:09:21.342 00:09:21.342 --- 10.0.0.2 ping statistics --- 00:09:21.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.343 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:09:21.343 00:09:21.343 --- 10.0.0.1 ping statistics --- 00:09:21.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.343 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:21.343 only one NIC for nvmf test 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
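The multipath test bails out at this point by design: target/multipath.sh line 45 tests a variable that expands to empty ('[ -z ]' in the trace, presumably the second target IP that was set empty a few entries earlier), prints "only one NIC for nvmf test", and after teardown exits 0, so this rig never exercises a second path. The network bring-up it is now tearing down is easier to read pulled out of the trace. A minimal sketch, assuming the two renamed E810 ports cvl_0_0 and cvl_0_1 already exist; every command below is copied from the entries above:

    ip netns add cvl_0_0_ns_spdk                      # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # port 0 becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator interface; the comment tag is what
    # lets the later cleanup strip the rule again with
    # `iptables-save | grep -v SPDK_NVMF | iptables-restore`.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

One smaller thing worth flagging: the recurring "common.sh: line 33: [: : integer expression expected" message in this trace is a benign bash artifact. Line 33 evaluates '[ "" -eq 1 ]' on a variable that is unset in this job (its name is not visible in the trace); a defensive test such as [[ "${SOME_FLAG:-0}" -eq 1 ]] — SOME_FLAG being a placeholder, not the real name — would silence it without changing behavior.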
00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.343 rmmod nvme_tcp 00:09:21.343 rmmod nvme_fabrics 00:09:21.343 rmmod nvme_keyring 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.343 18:21:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.728 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.728 00:09:22.728 real 0m10.026s 00:09:22.728 user 0m2.201s 00:09:22.728 sys 0m5.762s 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:22.990 ************************************ 00:09:22.990 END TEST nvmf_target_multipath 00:09:22.990 ************************************ 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.990 ************************************ 00:09:22.990 START TEST nvmf_zcopy 00:09:22.990 ************************************ 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:22.990 * Looking for test storage... 
00:09:22.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.990 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.253 --rc genhtml_branch_coverage=1 00:09:23.253 --rc genhtml_function_coverage=1 00:09:23.253 --rc genhtml_legend=1 00:09:23.253 --rc geninfo_all_blocks=1 00:09:23.253 --rc geninfo_unexecuted_blocks=1 00:09:23.253 00:09:23.253 ' 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.253 --rc genhtml_branch_coverage=1 00:09:23.253 --rc genhtml_function_coverage=1 00:09:23.253 --rc genhtml_legend=1 00:09:23.253 --rc geninfo_all_blocks=1 00:09:23.253 --rc geninfo_unexecuted_blocks=1 00:09:23.253 00:09:23.253 ' 00:09:23.253 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:23.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.254 --rc genhtml_branch_coverage=1 00:09:23.254 --rc genhtml_function_coverage=1 00:09:23.254 --rc genhtml_legend=1 00:09:23.254 --rc geninfo_all_blocks=1 00:09:23.254 --rc geninfo_unexecuted_blocks=1 00:09:23.254 00:09:23.254 ' 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.254 --rc genhtml_branch_coverage=1 00:09:23.254 --rc genhtml_function_coverage=1 00:09:23.254 --rc genhtml_legend=1 00:09:23.254 --rc geninfo_all_blocks=1 00:09:23.254 --rc geninfo_unexecuted_blocks=1 00:09:23.254 00:09:23.254 ' 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:23.254 18:21:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:31.400 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:31.400 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:31.400 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:31.400 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.400 18:21:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.400 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:09:31.400 00:09:31.400 --- 10.0.0.2 ping statistics --- 00:09:31.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.400 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:09:31.401 00:09:31.401 --- 10.0.0.1 ping statistics --- 00:09:31.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.401 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1977775 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1977775 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1977775 ']' 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.401 18:21:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.401 [2024-12-06 18:21:25.384110] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:09:31.401 [2024-12-06 18:21:25.384182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.401 [2024-12-06 18:21:25.482279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.401 [2024-12-06 18:21:25.532718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.401 [2024-12-06 18:21:25.532768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.401 [2024-12-06 18:21:25.532776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.401 [2024-12-06 18:21:25.532783] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.401 [2024-12-06 18:21:25.532789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.401 [2024-12-06 18:21:25.533579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 [2024-12-06 18:21:26.244115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 [2024-12-06 18:21:26.268323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 malloc0 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:31.662 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:31.663 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:31.663 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:31.663 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:31.663 { 00:09:31.663 "params": { 00:09:31.663 "name": "Nvme$subsystem", 00:09:31.663 "trtype": "$TEST_TRANSPORT", 00:09:31.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.663 "adrfam": "ipv4", 00:09:31.663 "trsvcid": "$NVMF_PORT", 00:09:31.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.663 "hdgst": ${hdgst:-false}, 00:09:31.663 "ddgst": ${ddgst:-false} 00:09:31.663 }, 00:09:31.663 "method": "bdev_nvme_attach_controller" 00:09:31.663 } 00:09:31.663 EOF 00:09:31.663 )") 00:09:31.663 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:31.663 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
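The cat / jq . / IFS=, / printf sequence here is gen_nvmf_target_json assembling and validating the bdevperf attach config from a heredoc template; the resulting JSON is printed verbatim in the next entries (a single Nvme1 controller pointed at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, header and data digests off). Pulled out of the trace, the zcopy target bring-up and the perf run reduce to the sketch below. Two assumptions: rpc.py stands in for the harness's rpc_cmd wrapper (the rpc.py path is the one defined earlier in this trace), and the fd-62 redirection is one way to reproduce what --json /dev/fd/62 implies; the namespaced nvmf_tgt from the preceding entries is assumed to be up and listening on its default RPC socket.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Transport opts copied verbatim from the trace; --zcopy enables the
    # zero-copy path this test exists to cover.
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0        # 32 MB ram bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # bdevperf receives its attach config on an inherited fd rather than a file:
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$bdevperf" --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62< <(gen_nvmf_target_json)

For scale: the ten-second verify run that follows climbs from roughly 6.5k to 8.8k IOPS at 8 KiB I/O and queue depth 128 against this malloc-backed namespace, ending at 8774.99 IOPS (68.55 MiB/s) average.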
00:09:31.663 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:31.663 18:21:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:31.663 "params": { 00:09:31.663 "name": "Nvme1", 00:09:31.663 "trtype": "tcp", 00:09:31.663 "traddr": "10.0.0.2", 00:09:31.663 "adrfam": "ipv4", 00:09:31.663 "trsvcid": "4420", 00:09:31.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.663 "hdgst": false, 00:09:31.663 "ddgst": false 00:09:31.663 }, 00:09:31.663 "method": "bdev_nvme_attach_controller" 00:09:31.663 }' 00:09:31.663 [2024-12-06 18:21:26.378514] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:09:31.663 [2024-12-06 18:21:26.378578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978100 ] 00:09:31.924 [2024-12-06 18:21:26.473199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.924 [2024-12-06 18:21:26.526105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.184 Running I/O for 10 seconds... 00:09:34.069 6511.00 IOPS, 50.87 MiB/s [2024-12-06T17:21:29.795Z] 6560.50 IOPS, 51.25 MiB/s [2024-12-06T17:21:31.179Z] 6578.33 IOPS, 51.39 MiB/s [2024-12-06T17:21:32.120Z] 7288.00 IOPS, 56.94 MiB/s [2024-12-06T17:21:33.060Z] 7798.40 IOPS, 60.92 MiB/s [2024-12-06T17:21:33.999Z] 8137.50 IOPS, 63.57 MiB/s [2024-12-06T17:21:34.945Z] 8378.86 IOPS, 65.46 MiB/s [2024-12-06T17:21:35.886Z] 8558.25 IOPS, 66.86 MiB/s [2024-12-06T17:21:36.825Z] 8699.33 IOPS, 67.96 MiB/s [2024-12-06T17:21:36.825Z] 8809.90 IOPS, 68.83 MiB/s 00:09:42.041 Latency(us) 00:09:42.041 [2024-12-06T17:21:36.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.041 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:42.041 Verification LBA range: start 0x0 length 0x1000 00:09:42.041 Nvme1n1 : 10.05 8774.99 68.55 0.00 0.00 14490.14 2867.20 44346.03 00:09:42.041 [2024-12-06T17:21:36.825Z] =================================================================================================================== 00:09:42.041 [2024-12-06T17:21:36.825Z] Total : 8774.99 68.55 0.00 0.00 14490.14 2867.20 44346.03 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1980115 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.302 { 00:09:42.302 "params": { 00:09:42.302 "name": 
"Nvme$subsystem", 00:09:42.302 "trtype": "$TEST_TRANSPORT", 00:09:42.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.302 "adrfam": "ipv4", 00:09:42.302 "trsvcid": "$NVMF_PORT", 00:09:42.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.302 "hdgst": ${hdgst:-false}, 00:09:42.302 "ddgst": ${ddgst:-false} 00:09:42.302 }, 00:09:42.302 "method": "bdev_nvme_attach_controller" 00:09:42.302 } 00:09:42.302 EOF 00:09:42.302 )") 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:42.302 [2024-12-06 18:21:36.921844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:36.921873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:42.302 18:21:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.302 "params": { 00:09:42.302 "name": "Nvme1", 00:09:42.302 "trtype": "tcp", 00:09:42.302 "traddr": "10.0.0.2", 00:09:42.302 "adrfam": "ipv4", 00:09:42.302 "trsvcid": "4420", 00:09:42.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.302 "hdgst": false, 00:09:42.302 "ddgst": false 00:09:42.302 }, 00:09:42.302 "method": "bdev_nvme_attach_controller" 00:09:42.302 }' 00:09:42.302 [2024-12-06 18:21:36.933841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:36.933851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 [2024-12-06 18:21:36.945872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:36.945880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 [2024-12-06 18:21:36.957903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:36.957910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 [2024-12-06 18:21:36.964421] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:09:42.302 [2024-12-06 18:21:36.964470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1980115 ] 00:09:42.302 [2024-12-06 18:21:36.969932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:36.969941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 [2024-12-06 18:21:36.981963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:36.981972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 [2024-12-06 18:21:36.993994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:36.994002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 [2024-12-06 18:21:37.006025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:37.006038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.302 [2024-12-06 18:21:37.018056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.302 [2024-12-06 18:21:37.018064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.303 [2024-12-06 18:21:37.030086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.303 [2024-12-06 18:21:37.030093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.303 [2024-12-06 18:21:37.042116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.303 [2024-12-06 18:21:37.042123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.303 [2024-12-06 18:21:37.044943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.303 [2024-12-06 18:21:37.054148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.303 [2024-12-06 18:21:37.054157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.303 [2024-12-06 18:21:37.066179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.303 [2024-12-06 18:21:37.066189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.303 [2024-12-06 18:21:37.074478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.303 [2024-12-06 18:21:37.078210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.303 [2024-12-06 18:21:37.078219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.564 [2024-12-06 18:21:37.090247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.564 [2024-12-06 18:21:37.090258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.564 [2024-12-06 18:21:37.102277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.564 [2024-12-06 18:21:37.102289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.564 [2024-12-06 18:21:37.114305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
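The interleaved NSID error pairs are not a test failure: namespace 1 was attached at target/zcopy.sh@30 above, and while this second bdevperf run proceeds the test keeps re-issuing nvmf_subsystem_add_ns against the same NSID, so the target logs the rejection on every attempt while pausing and resuming the namespace under zero-copy I/O. A sketch of that kind of retry loop, an assumed shape rather than the literal zcopy.sh source; rpc_cmd is the harness wrapper seen in the traces:

# Hammer the subsystem with add-namespace RPCs while bdevperf
# ($perfpid, started with -t 5 above) is still running; each call is
# expected to fail with "Requested NSID 1 already in use".
while kill -0 "$perfpid" 2> /dev/null; do
	rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done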
00:09:42.564 [2024-12-06 18:21:37.126335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:42.564 [2024-12-06 18:21:37.126344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats, roughly every 12 ms, from 18:21:37.138365 through 18:21:37.366995 ...]
00:09:42.824 Running I/O for 5 seconds...
00:09:42.824 [2024-12-06 18:21:37.379002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:42.824 [2024-12-06 18:21:37.379011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats from 18:21:37.394583 through 18:21:37.474413 ...]
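For reference, rpc_cmd in these traces is the test harness's wrapper around SPDK's JSON-RPC client. Outside the harness the same namespace call could be issued directly; a sketch, where the default socket path /var/tmp/spdk.sock is an assumption for this particular target:

# Direct equivalent of the rpc_cmd call being retried above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
	nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1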
00:09:42.824 [2024-12-06 18:21:37.487696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:42.824 [2024-12-06 18:21:37.487712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair repeats, roughly every 13 ms, up to the first per-second stats line ...]
00:09:43.606 19172.00 IOPS, 149.78 MiB/s
00:09:43.867 [2024-12-06 18:21:38.429610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:43.867 [2024-12-06 18:21:38.429624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair keeps repeating at the same rate through the next second of the run ...]
00:09:44.651 19273.00 IOPS, 150.57 MiB/s
[... the pair resumes immediately after the stats line ...]
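One detail worth noting in the two bdevperf invocations above: --json /dev/fd/62 and --json /dev/fd/63 are file-descriptor paths produced by bash process substitution, i.e. gen_nvmf_target_json's output is handed to bdevperf without an intermediate config file. A sketch of the equivalent standalone invocation (run from the spdk checkout, with gen_nvmf_target_json defined as in the earlier sketch):

# bash replaces <(...) with a /dev/fd/NN path that bdevperf opens and
# reads like a regular file; mirrors target/zcopy.sh@37 above.
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192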
00:09:44.912 [2024-12-06 18:21:39.569450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:44.912 [2024-12-06 18:21:39.569465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair continues, roughly every 13 ms, to the end of this excerpt ...]
00:09:45.695 [2024-12-06 18:21:40.306733]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-12-06 18:21:40.136909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-12-06 18:21:40.136924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-12-06 18:21:40.145742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-12-06 18:21:40.145757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-12-06 18:21:40.154599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-12-06 18:21:40.154618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-12-06 18:21:40.163759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-12-06 18:21:40.163774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.434 [2024-12-06 18:21:40.176421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.434 [2024-12-06 18:21:40.176436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.435 [2024-12-06 18:21:40.189734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.435 [2024-12-06 18:21:40.189748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.435 [2024-12-06 18:21:40.203095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.435 [2024-12-06 18:21:40.203110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.435 [2024-12-06 18:21:40.215624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.435 [2024-12-06 18:21:40.215644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.228622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.228641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.241199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.241214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.254466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.254481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.267924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.267939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.281391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.281405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.293727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.293743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.306733] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.306747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.319584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.319599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.331945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.331960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.345571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.345586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.359040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.359055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 [2024-12-06 18:21:40.372291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.372306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.695 19297.00 IOPS, 150.76 MiB/s [2024-12-06T17:21:40.479Z] [2024-12-06 18:21:40.385257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.695 [2024-12-06 18:21:40.385273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.696 [2024-12-06 18:21:40.398430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.696 [2024-12-06 18:21:40.398449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.696 [2024-12-06 18:21:40.411477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.696 [2024-12-06 18:21:40.411492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.696 [2024-12-06 18:21:40.425002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.696 [2024-12-06 18:21:40.425016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.696 [2024-12-06 18:21:40.437429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.696 [2024-12-06 18:21:40.437444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.696 [2024-12-06 18:21:40.450663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.696 [2024-12-06 18:21:40.450678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.696 [2024-12-06 18:21:40.463847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.696 [2024-12-06 18:21:40.463862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.696 [2024-12-06 18:21:40.477347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.696 [2024-12-06 18:21:40.477362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.490342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:45.955 [2024-12-06 18:21:40.490358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.503911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.503927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.517295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.517311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.530647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.530662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.543820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.543835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.556245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.556261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.569851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.569866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.583283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.583298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.596197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.596212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.609529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.609544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.622567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.622582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.634775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.634790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.647040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.647055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.660149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.660164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.673238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.673253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.686575] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.686591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.699605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.699620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.712245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.712260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.955 [2024-12-06 18:21:40.725420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.955 [2024-12-06 18:21:40.725435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.738652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.738668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.752108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.752123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.765770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.765785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.779339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.779354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.792891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.792907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.805366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.805381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.818776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.818792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.832258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.832273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.844924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.844940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.858058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.858073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.215 [2024-12-06 18:21:40.871155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.215 [2024-12-06 18:21:40.871171] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.884536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.884551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.898392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.898407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.911200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.911215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.924392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.924407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.937317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.937332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.950852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.950868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.963955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.963970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.976814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.976829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.216 [2024-12-06 18:21:40.990152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.216 [2024-12-06 18:21:40.990167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.002997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.003013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.015408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.015423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.028612] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.028627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.041425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.041440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.054241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.054256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.067512] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.067527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.080508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.080523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.094032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.094048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.106529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.106544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.118957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.118973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.132194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.132209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.144935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.144950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.157463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.476 [2024-12-06 18:21:41.157478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.476 [2024-12-06 18:21:41.171242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.477 [2024-12-06 18:21:41.171257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.477 [2024-12-06 18:21:41.184393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.477 [2024-12-06 18:21:41.184408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.477 [2024-12-06 18:21:41.197006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.477 [2024-12-06 18:21:41.197021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.477 [2024-12-06 18:21:41.209779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.477 [2024-12-06 18:21:41.209795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.477 [2024-12-06 18:21:41.223410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.477 [2024-12-06 18:21:41.223426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.477 [2024-12-06 18:21:41.236322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.477 [2024-12-06 18:21:41.236337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.477 [2024-12-06 18:21:41.249806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.477 [2024-12-06 18:21:41.249821] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.263251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.263266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.277125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.277140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.289731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.289745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.302416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.302431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.316077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.316092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.328843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.328858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.342011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.342027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.355366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.355381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.368169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.368184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.381529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.381550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 19307.00 IOPS, 150.84 MiB/s [2024-12-06T17:21:41.522Z] [2024-12-06 18:21:41.394654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.394669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.407825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.407841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.421381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.421395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.434140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.434155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 
18:21:41.447377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.447392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.460528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.460543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.474214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.474228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.487140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.487155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.500051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.500066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.738 [2024-12-06 18:21:41.512989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.738 [2024-12-06 18:21:41.513004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.526468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.526483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.539004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.539019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.552270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.552286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.565626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.565645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.579267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.579283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.592520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.592535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.605444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.605459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.618248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.618263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.631687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.631709] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.644342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.644358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.657955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.657970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.670547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.670562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.683596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.683610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.696705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.696720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.709836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.709851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.722978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.722993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.735996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.736011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.749063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.749078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.762323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.762337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.999 [2024-12-06 18:21:41.774957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.999 [2024-12-06 18:21:41.774972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.788147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.788163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.801590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.801605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.814852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.814867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.828131] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.828146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.841293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.841308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.854861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.854877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.868422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.868437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.881576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.881596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.894755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.894770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.907169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.907184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.920686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.920701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.933575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.933589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.945750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.945765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.958563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.958578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.971539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.971554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.983949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.983964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:41.996991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.259 [2024-12-06 18:21:41.997007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.259 [2024-12-06 18:21:42.009982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.260 [2024-12-06 18:21:42.009997] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.260 [2024-12-06 18:21:42.023029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.260 [2024-12-06 18:21:42.023044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.260 [2024-12-06 18:21:42.036318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.260 [2024-12-06 18:21:42.036333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.518 [2024-12-06 18:21:42.049746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.518 [2024-12-06 18:21:42.049762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.518 [2024-12-06 18:21:42.062343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.062358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.074881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.074896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.088129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.088145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.100600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.100615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.113411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.113426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.127056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.127071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.139952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.139967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.153042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.153057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.165817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.165831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.178414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.178430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.191642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.191657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.204352] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.204367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.217203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.217218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.229678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.229693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.242168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.242183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.254642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.254656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.267407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.267422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.280417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.280432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.519 [2024-12-06 18:21:42.293996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.519 [2024-12-06 18:21:42.294010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.778 [2024-12-06 18:21:42.307368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.778 [2024-12-06 18:21:42.307383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.778 [2024-12-06 18:21:42.320245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.778 [2024-12-06 18:21:42.320260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.778 [2024-12-06 18:21:42.333636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.778 [2024-12-06 18:21:42.333654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.778 [2024-12-06 18:21:42.346336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.778 [2024-12-06 18:21:42.346352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.778 [2024-12-06 18:21:42.359760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.778 [2024-12-06 18:21:42.359775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.778 [2024-12-06 18:21:42.373086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.778 [2024-12-06 18:21:42.373101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.778 [2024-12-06 18:21:42.385961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.778 [2024-12-06 18:21:42.385975] 
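The burst above is expected noise rather than a failure: the kill/wait of PID 1980115 a few lines below suggests zcopy.sh keeps a background loop re-issuing nvmf_subsystem_add_ns for an NSID that is already attached, so every attempt fails while I/O continues. A minimal sketch of such a loop, assuming a running target with scripts/rpc.py on the default socket (the bdev name malloc0 and the NQN are taken from the traces further down, and this is not the literal zcopy.sh code):

# Hedged sketch: NSID 1 is already attached to cnode1, so each call
# fails with "Requested NSID 1 already in use" by design.
while true; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done &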
00:09:47.778 19327.40 IOPS, 151.00 MiB/s
00:09:47.778 Latency(us)
00:09:47.778 [2024-12-06T17:21:42.562Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s  Average      min      max
00:09:47.778 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:47.778 Nvme1n1            :       5.01 19329.24   151.01     0.00   0.00  6616.59  3058.35 16930.13
00:09:47.778 [2024-12-06T17:21:42.562Z] ===================================================================================================================
00:09:47.778 [2024-12-06T17:21:42.562Z] Total              :            19329.24   151.01     0.00   0.00  6616.59  3058.35 16930.13
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" pair drains nine more times between 18:21:42.395 and 18:21:42.492 as the background loop winds down ...]
00:09:47.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1980115) - No such process
00:09:47.778 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1980115
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.778 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.778 delay0
00:09:47.778 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.778 18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:21:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:48.038 [2024-12-06 18:21:42.663987] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:56.268 Initializing NVMe Controllers
00:09:56.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:56.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:56.268 Initialization complete. Launching workers.
00:09:56.268 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 233, failed: 40293
00:09:56.268 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 40413, failed to submit 113
00:09:56.268 success 40327, unsuccessful 86, failed 0
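The abort counters above are internally consistent: the 40413 submitted aborts split into 40327 successful plus 86 unsuccessful, and 40413 submitted plus 113 failed-to-submit matches the 233 completed plus 40293 failed I/Os. The traced sequence that set this up (swap the namespace for a delay bdev with 1 s, i.e. 1000000 us, average and p99 read/write latency, then fire the abort example at it) can be reproduced standalone; a hedged sketch using the plain rpc.py CLI instead of the harness's rpc_cmd wrapper:

# Hedged standalone equivalent of the traced rpc_cmd calls above.
# bdev_delay_create latency arguments are in microseconds.
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'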
00:09:56.268 18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:56.268 rmmod nvme_tcp
00:09:56.268 rmmod nvme_fabrics
00:09:56.268 rmmod nvme_keyring
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1977775 ']'
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1977775
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1977775 ']'
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1977775
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977775
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977775'
killing process with pid 1977775
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1977775
18:21:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1977775
00:09:56.268 18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
18:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:57.651
00:09:57.651 real 0m34.534s
00:09:57.651 user 0m45.288s
00:09:57.651 sys 0m12.159s
00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:57.651 ************************************
00:09:57.651 END TEST nvmf_zcopy
00:09:57.651 ************************************
00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
18:21:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
18:21:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
18:21:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:57.651 ************************************
00:09:57.651 START TEST nvmf_nmic
00:09:57.651 ************************************
00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
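The START/END TEST banners and the real/user/sys triple above come from the harness's run_test wrapper, which appears to live in autotest_common.sh per the @11xx trace lines, and which times each sub-script. A minimal reconstruction inferred from the visible output, not the SPDK source:

# Hedged sketch of what run_test does, based only on the banners and
# the time(1)-style output seen in this log:
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"   # produces the real/user/sys lines above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}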
00:09:57.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.651 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.652 --rc genhtml_branch_coverage=1 00:09:57.652 --rc genhtml_function_coverage=1 00:09:57.652 --rc genhtml_legend=1 00:09:57.652 --rc geninfo_all_blocks=1 00:09:57.652 --rc geninfo_unexecuted_blocks=1 00:09:57.652 00:09:57.652 ' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.652 --rc genhtml_branch_coverage=1 00:09:57.652 --rc genhtml_function_coverage=1 00:09:57.652 --rc genhtml_legend=1 00:09:57.652 --rc geninfo_all_blocks=1 00:09:57.652 --rc geninfo_unexecuted_blocks=1 00:09:57.652 00:09:57.652 ' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.652 --rc genhtml_branch_coverage=1 00:09:57.652 --rc genhtml_function_coverage=1 00:09:57.652 --rc genhtml_legend=1 00:09:57.652 --rc geninfo_all_blocks=1 00:09:57.652 --rc geninfo_unexecuted_blocks=1 00:09:57.652 00:09:57.652 ' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.652 --rc genhtml_branch_coverage=1 00:09:57.652 --rc genhtml_function_coverage=1 00:09:57.652 --rc genhtml_legend=1 00:09:57.652 --rc geninfo_all_blocks=1 00:09:57.652 --rc geninfo_unexecuted_blocks=1 00:09:57.652 00:09:57.652 ' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
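The lt/cmp_versions trace above is scripts/common.sh gating on the installed lcov version: both version strings are split into numeric fields and compared field by field. A minimal standalone sketch of the same idea, assuming a simplified split on dots only (the real helper also splits on '-' and ':'; the name ver_lt is hypothetical, not the SPDK function):

# Return 0 (true) when version $1 is strictly less than version $2.
# Split both strings on '.' and compare field by field, treating a
# missing field as 0 -- the same approach the xtrace above walks through.
ver_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"   # matches the branch taken above
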
00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.652 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.912 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.912 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.912 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:57.912 
18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.912 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.912 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.912 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.912 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.913 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.913 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.913 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.913 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.913 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:57.913 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.913 18:21:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:06.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:06.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.053 18:21:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:06.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:06.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.053 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:10:06.054 00:10:06.054 --- 10.0.0.2 ping statistics --- 00:10:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.054 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:10:06.054 00:10:06.054 --- 10.0.0.1 ping statistics --- 00:10:06.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.054 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1986827 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1986827 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1986827 ']' 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.054 18:21:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 [2024-12-06 18:21:59.815867] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:10:06.054 [2024-12-06 18:21:59.815935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.054 [2024-12-06 18:21:59.915807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.054 [2024-12-06 18:21:59.970926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.054 [2024-12-06 18:21:59.970978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.054 [2024-12-06 18:21:59.970987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.054 [2024-12-06 18:21:59.970994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.054 [2024-12-06 18:21:59.971000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.054 [2024-12-06 18:21:59.973001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.054 [2024-12-06 18:21:59.973166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.054 [2024-12-06 18:21:59.973327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.054 [2024-12-06 18:21:59.973328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 [2024-12-06 18:22:00.664694] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 Malloc0 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 [2024-12-06 18:22:00.736837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:06.054 test case1: single bdev can't be used in multiple subsystems 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.054 [2024-12-06 18:22:00.772762] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:06.054 [2024-12-06 18:22:00.772784] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:06.054 [2024-12-06 18:22:00.772792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.054 request: 00:10:06.054 { 00:10:06.054 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:06.054 "namespace": { 00:10:06.054 "bdev_name": "Malloc0", 00:10:06.054 "no_auto_visible": false, 
00:10:06.054 "hide_metadata": false 00:10:06.054 }, 00:10:06.054 "method": "nvmf_subsystem_add_ns", 00:10:06.054 "req_id": 1 00:10:06.054 } 00:10:06.054 Got JSON-RPC error response 00:10:06.054 response: 00:10:06.054 { 00:10:06.054 "code": -32602, 00:10:06.054 "message": "Invalid parameters" 00:10:06.054 } 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:06.054 Adding namespace failed - expected result. 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:06.054 test case2: host connect to nvmf target in multiple paths 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.054 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.055 [2024-12-06 18:22:00.784908] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:06.055 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.055 18:22:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.967 18:22:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:09.350 18:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:09.350 18:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:09.350 18:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:09.350 18:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:09.350 18:22:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:11.259 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:11.259 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:11.259 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:11.259 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:11.259 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:11.259 18:22:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:11.259 18:22:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:11.259 [global] 00:10:11.259 thread=1 00:10:11.259 invalidate=1 00:10:11.259 rw=write 00:10:11.259 time_based=1 00:10:11.259 runtime=1 00:10:11.259 ioengine=libaio 00:10:11.259 direct=1 00:10:11.259 bs=4096 00:10:11.259 iodepth=1 00:10:11.259 norandommap=0 00:10:11.259 numjobs=1 00:10:11.259 00:10:11.259 verify_dump=1 00:10:11.259 verify_backlog=512 00:10:11.259 verify_state_save=0 00:10:11.259 do_verify=1 00:10:11.259 verify=crc32c-intel 00:10:11.259 [job0] 00:10:11.259 filename=/dev/nvme0n1 00:10:11.259 Could not set queue depth (nvme0n1) 00:10:11.520 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.520 fio-3.35 00:10:11.520 Starting 1 thread 00:10:12.925 00:10:12.925 job0: (groupid=0, jobs=1): err= 0: pid=1988362: Fri Dec 6 18:22:07 2024 00:10:12.925 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:12.925 slat (nsec): min=24297, max=53393, avg=25365.76, stdev=2972.63 00:10:12.925 clat (usec): min=687, max=1516, avg=1083.18, stdev=90.46 00:10:12.925 lat (usec): min=711, max=1541, avg=1108.55, stdev=90.21 00:10:12.925 clat percentiles (usec): 00:10:12.925 | 1.00th=[ 791], 5.00th=[ 922], 10.00th=[ 963], 20.00th=[ 1020], 00:10:12.925 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:10:12.925 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1188], 00:10:12.925 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1516], 99.95th=[ 1516], 00:10:12.925 | 99.99th=[ 1516] 00:10:12.925 write: IOPS=679, BW=2717KiB/s (2782kB/s)(2720KiB/1001msec); 0 zone resets 00:10:12.925 slat (nsec): min=9387, max=67413, avg=28355.14, stdev=9054.52 00:10:12.925 clat (usec): min=226, max=962, avg=593.99, stdev=102.19 00:10:12.925 lat (usec): min=236, max=994, avg=622.35, stdev=105.98 00:10:12.925 clat percentiles (usec): 00:10:12.925 | 1.00th=[ 330], 5.00th=[ 400], 10.00th=[ 453], 20.00th=[ 506], 00:10:12.925 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:10:12.925 | 70.00th=[ 660], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 734], 00:10:12.925 | 99.00th=[ 783], 99.50th=[ 816], 99.90th=[ 963], 99.95th=[ 963], 00:10:12.925 | 99.99th=[ 963] 00:10:12.925 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.925 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.925 lat (usec) : 250=0.08%, 500=10.57%, 750=44.80%, 1000=8.47% 00:10:12.925 lat (msec) : 2=36.07% 00:10:12.925 cpu : usr=2.60%, sys=2.50%, ctx=1192, majf=0, minf=1 00:10:12.925 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.925 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.925 issued rwts: total=512,680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.925 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.925 00:10:12.925 Run status group 0 (all jobs): 00:10:12.925 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:12.925 WRITE: bw=2717KiB/s (2782kB/s), 2717KiB/s-2717KiB/s (2782kB/s-2782kB/s), io=2720KiB (2785kB), run=1001-1001msec 00:10:12.925 00:10:12.925 Disk stats (read/write): 00:10:12.925 nvme0n1: ios=562/516, merge=0/0, ticks=585/299, in_queue=884, util=93.69% 
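The fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expanded to the job dumped in the log: a 1-second, 4 KiB sequential-write workload at queue depth 1 with crc32c-intel verification against /dev/nvme0n1. Re-expressed as a standalone script, with the parameters copied verbatim from that dump (the /tmp path is illustrative only):

# Rebuild the wrapper-generated job file and run it directly.
cat > /tmp/nvmf-verify.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-verify.fio
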
00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.925 rmmod nvme_tcp 00:10:12.925 rmmod nvme_fabrics 00:10:12.925 rmmod nvme_keyring 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1986827 ']' 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1986827 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1986827 ']' 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1986827 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.925 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1986827 00:10:13.185 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.185 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.185 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1986827' 00:10:13.185 killing process with pid 1986827 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1986827 
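waitforserial_disconnect in the trace above polls lsblk until no block device with the test serial SPDKISFASTANDAWESOME remains before the target is torn down. A minimal sketch of that polling pattern, assuming the helper name wait_serial_gone and a 15-iteration budget borrowed from the connect-side loop earlier in the log:

# Poll until no block device advertises the given serial any more.
# Mirrors the lsblk -l -o NAME,SERIAL | grep -q -w check in the xtrace.
wait_serial_gone() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 1
    done
    echo "device with serial $serial still present" >&2
    return 1
}

wait_serial_gone SPDKISFASTANDAWESOME
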
00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1986827 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.186 18:22:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.730 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.730 00:10:15.730 real 0m17.732s 00:10:15.730 user 0m49.175s 00:10:15.730 sys 0m6.367s 00:10:15.730 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.730 18:22:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:15.730 ************************************ 00:10:15.730 END TEST nvmf_nmic 00:10:15.730 ************************************ 00:10:15.730 18:22:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.730 18:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.730 18:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.730 18:22:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.730 ************************************ 00:10:15.730 START TEST nvmf_fio_target 00:10:15.730 ************************************ 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:15.730 * Looking for test storage... 
00:10:15.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.730 --rc genhtml_branch_coverage=1 00:10:15.730 --rc genhtml_function_coverage=1 00:10:15.730 --rc genhtml_legend=1 00:10:15.730 --rc geninfo_all_blocks=1 00:10:15.730 --rc geninfo_unexecuted_blocks=1 00:10:15.730 00:10:15.730 ' 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.730 --rc genhtml_branch_coverage=1 00:10:15.730 --rc genhtml_function_coverage=1 00:10:15.730 --rc genhtml_legend=1 00:10:15.730 --rc geninfo_all_blocks=1 00:10:15.730 --rc geninfo_unexecuted_blocks=1 00:10:15.730 00:10:15.730 ' 00:10:15.730 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.730 --rc genhtml_branch_coverage=1 00:10:15.730 --rc genhtml_function_coverage=1 00:10:15.730 --rc genhtml_legend=1 00:10:15.730 --rc geninfo_all_blocks=1 00:10:15.730 --rc geninfo_unexecuted_blocks=1 00:10:15.730 00:10:15.731 ' 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.731 --rc genhtml_branch_coverage=1 00:10:15.731 --rc genhtml_function_coverage=1 00:10:15.731 --rc genhtml_legend=1 00:10:15.731 --rc geninfo_all_blocks=1 00:10:15.731 --rc geninfo_unexecuted_blocks=1 00:10:15.731 00:10:15.731 ' 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.731 18:22:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.731 18:22:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.908 18:22:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:23.908 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:23.908 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.908 18:22:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:23.908 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:23.908 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.908 18:22:17 
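The discovery pass above walks the cached PCI device list, keeps the supported Intel E810 ports (vendor 0x8086, device 0x159b, driver ice), and records the kernel net devices behind them, cvl_0_0 and cvl_0_1 on this rig. The same idea as a minimal standalone sketch; the lspci parsing is an assumption here, since the test populates its pci_bus_cache through its own helpers rather than lspci:

#!/usr/bin/env bash
# Map supported E810 ports to their kernel net devices, mirroring the
# gather_supported_nvmf_pci_devs loop traced above.
mapfile -t pci_devs < <(lspci -Dnd 8086:159b | awk '{print $1}')
net_devs=()
for pci in "${pci_devs[@]}"; do
    echo "Found $pci (0x8086 - 0x159b)"
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue          # port may have no bound netdev
        net_devs+=("${path##*/}")
        echo "Found net devices under $pci: ${path##*/}"
    done
done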
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:23.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:10:23.908 00:10:23.908 --- 10.0.0.2 ping statistics --- 00:10:23.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.908 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:23.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:10:23.908 00:10:23.908 --- 10.0.0.1 ping statistics --- 00:10:23.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.908 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1992940 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1992940 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1992940 ']' 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.908 18:22:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.908 [2024-12-06 18:22:17.663960] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
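Condensed, the network bring-up traced above amounts to the sequence below: move one E810 port into a fresh namespace, address both ends, open the NVMe/TCP port, verify reachability in both directions, then launch nvmf_tgt inside the namespace. This is a sketch assembled from the commands visible in this run, with paths shortened; the cvl_0_0/cvl_0_1 names, the 10.0.0.x/24 addresses, and the nvmf_tgt flags are all specific to this job.

# Target side lives in its own namespace; the initiator stays in the root one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF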
00:10:23.908 [2024-12-06 18:22:17.664029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.908 [2024-12-06 18:22:17.760908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.908 [2024-12-06 18:22:17.814445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.908 [2024-12-06 18:22:17.814503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.908 [2024-12-06 18:22:17.814512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.908 [2024-12-06 18:22:17.814519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.908 [2024-12-06 18:22:17.814525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.908 [2024-12-06 18:22:17.816953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.908 [2024-12-06 18:22:17.817113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.908 [2024-12-06 18:22:17.817150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.908 [2024-12-06 18:22:17.817153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.908 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.908 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:23.908 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:23.908 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:23.908 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.908 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.908 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:24.169 [2024-12-06 18:22:18.707626] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.169 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.431 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:24.431 18:22:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.693 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:24.693 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.953 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:24.954 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.954 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:24.954 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:25.213 18:22:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.473 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:25.473 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.733 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:25.733 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.733 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:25.733 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:25.993 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:26.254 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:26.254 18:22:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.254 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:26.254 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:26.514 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.775 [2024-12-06 18:22:21.366228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.775 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:27.036 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:27.036 18:22:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:28.951 18:22:23 
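The RPC sequence traced above assembles everything the fio jobs below exercise: seven 64 MiB malloc bdevs with a 512 B block size, a RAID-0 over Malloc2/Malloc3, a concat array over Malloc4-Malloc6, one subsystem carrying four namespaces, a TCP listener on 10.0.0.2:4420, and finally a host connect that surfaces /dev/nvme0n1 through /dev/nvme0n4. A condensed sketch in the order the trace issues the calls, with the rpc.py path shortened and the malloc creations folded into a loop:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for _ in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420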
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:28.951 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:28.951 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.951 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:28.951 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:28.951 18:22:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:30.866 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:30.866 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:30.866 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.866 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:30.866 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.866 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:30.866 18:22:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:30.866 [global] 00:10:30.866 thread=1 00:10:30.866 invalidate=1 00:10:30.866 rw=write 00:10:30.866 time_based=1 00:10:30.866 runtime=1 00:10:30.866 ioengine=libaio 00:10:30.866 direct=1 00:10:30.866 bs=4096 00:10:30.866 iodepth=1 00:10:30.866 norandommap=0 00:10:30.866 numjobs=1 00:10:30.866 00:10:30.866 verify_dump=1 00:10:30.866 verify_backlog=512 00:10:30.866 verify_state_save=0 00:10:30.866 do_verify=1 00:10:30.866 verify=crc32c-intel 00:10:30.866 [job0] 00:10:30.866 filename=/dev/nvme0n1 00:10:30.866 [job1] 00:10:30.866 filename=/dev/nvme0n2 00:10:30.867 [job2] 00:10:30.867 filename=/dev/nvme0n3 00:10:30.867 [job3] 00:10:30.867 filename=/dev/nvme0n4 00:10:30.867 Could not set queue depth (nvme0n1) 00:10:30.867 Could not set queue depth (nvme0n2) 00:10:30.867 Could not set queue depth (nvme0n3) 00:10:30.867 Could not set queue depth (nvme0n4) 00:10:31.127 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.127 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.127 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.127 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.127 fio-3.35 00:10:31.127 Starting 4 threads 00:10:32.529 00:10:32.529 job0: (groupid=0, jobs=1): err= 0: pid=1994638: Fri Dec 6 18:22:26 2024 00:10:32.529 read: IOPS=19, BW=77.1KiB/s (78.9kB/s)(80.0KiB/1038msec) 00:10:32.529 slat (nsec): min=25653, max=26572, avg=25956.40, stdev=216.00 00:10:32.529 clat (usec): min=1054, max=42856, avg=39843.82, stdev=9138.14 00:10:32.529 lat (usec): min=1080, max=42882, avg=39869.78, stdev=9138.04 00:10:32.529 clat percentiles (usec): 00:10:32.529 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 
20.00th=[41681], 00:10:32.529 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:32.529 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:32.529 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:32.529 | 99.99th=[42730] 00:10:32.529 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:32.529 slat (nsec): min=9158, max=52367, avg=19555.94, stdev=11828.61 00:10:32.529 clat (usec): min=114, max=918, avg=445.48, stdev=201.43 00:10:32.529 lat (usec): min=123, max=951, avg=465.03, stdev=210.78 00:10:32.529 clat percentiles (usec): 00:10:32.529 | 1.00th=[ 124], 5.00th=[ 149], 10.00th=[ 247], 20.00th=[ 269], 00:10:32.529 | 30.00th=[ 285], 40.00th=[ 343], 50.00th=[ 388], 60.00th=[ 465], 00:10:32.529 | 70.00th=[ 578], 80.00th=[ 660], 90.00th=[ 750], 95.00th=[ 807], 00:10:32.529 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 922], 99.95th=[ 922], 00:10:32.529 | 99.99th=[ 922] 00:10:32.529 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.529 lat (usec) : 250=10.90%, 500=50.19%, 750=25.75%, 1000=9.40% 00:10:32.529 lat (msec) : 2=0.19%, 50=3.57% 00:10:32.529 cpu : usr=0.77%, sys=0.96%, ctx=532, majf=0, minf=1 00:10:32.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.529 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.529 job1: (groupid=0, jobs=1): err= 0: pid=1994642: Fri Dec 6 18:22:26 2024 00:10:32.529 read: IOPS=15, BW=63.9KiB/s (65.4kB/s)(64.0KiB/1002msec) 00:10:32.529 slat (nsec): min=26191, max=27350, avg=26429.25, stdev=296.30 00:10:32.529 clat (usec): min=40951, max=42786, avg=41707.30, stdev=536.56 00:10:32.529 lat (usec): min=40977, max=42813, avg=41733.73, stdev=536.66 00:10:32.529 clat percentiles (usec): 00:10:32.529 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:32.529 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:32.529 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:32.529 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:32.529 | 99.99th=[42730] 00:10:32.529 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:32.529 slat (nsec): min=9038, max=70671, avg=31812.89, stdev=9121.22 00:10:32.529 clat (usec): min=155, max=1025, avg=613.45, stdev=147.06 00:10:32.529 lat (usec): min=165, max=1059, avg=645.27, stdev=150.51 00:10:32.529 clat percentiles (usec): 00:10:32.529 | 1.00th=[ 273], 5.00th=[ 343], 10.00th=[ 416], 20.00th=[ 490], 00:10:32.529 | 30.00th=[ 537], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:10:32.529 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 857], 00:10:32.529 | 99.00th=[ 955], 99.50th=[ 996], 99.90th=[ 1029], 99.95th=[ 1029], 00:10:32.529 | 99.99th=[ 1029] 00:10:32.529 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.529 lat (usec) : 250=0.76%, 500=21.02%, 750=60.42%, 1000=14.58% 00:10:32.529 lat (msec) : 2=0.19%, 50=3.03% 00:10:32.529 cpu : usr=0.90%, sys=2.20%, ctx=529, majf=0, 
minf=1 00:10:32.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.529 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.529 job2: (groupid=0, jobs=1): err= 0: pid=1994653: Fri Dec 6 18:22:26 2024 00:10:32.529 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:32.529 slat (nsec): min=7399, max=42525, avg=25249.04, stdev=2869.79 00:10:32.529 clat (usec): min=586, max=1116, avg=966.02, stdev=69.74 00:10:32.529 lat (usec): min=594, max=1141, avg=991.27, stdev=70.02 00:10:32.529 clat percentiles (usec): 00:10:32.529 | 1.00th=[ 758], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 914], 00:10:32.529 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:10:32.529 | 70.00th=[ 1004], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:10:32.529 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1123], 99.95th=[ 1123], 00:10:32.529 | 99.99th=[ 1123] 00:10:32.529 write: IOPS=826, BW=3305KiB/s (3384kB/s)(3308KiB/1001msec); 0 zone resets 00:10:32.529 slat (nsec): min=9513, max=63221, avg=27391.01, stdev=10339.71 00:10:32.529 clat (usec): min=153, max=870, avg=556.75, stdev=138.23 00:10:32.529 lat (usec): min=185, max=907, avg=584.15, stdev=142.90 00:10:32.529 clat percentiles (usec): 00:10:32.529 | 1.00th=[ 229], 5.00th=[ 293], 10.00th=[ 355], 20.00th=[ 445], 00:10:32.529 | 30.00th=[ 482], 40.00th=[ 537], 50.00th=[ 578], 60.00th=[ 603], 00:10:32.529 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 750], 00:10:32.529 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 873], 99.95th=[ 873], 00:10:32.529 | 99.99th=[ 873] 00:10:32.529 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.529 lat (usec) : 250=1.64%, 500=19.27%, 750=38.09%, 1000=28.53% 00:10:32.529 lat (msec) : 2=12.47% 00:10:32.529 cpu : usr=1.50%, sys=4.00%, ctx=1340, majf=0, minf=1 00:10:32.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.529 issued rwts: total=512,827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.529 job3: (groupid=0, jobs=1): err= 0: pid=1994658: Fri Dec 6 18:22:26 2024 00:10:32.529 read: IOPS=16, BW=67.0KiB/s (68.6kB/s)(68.0KiB/1015msec) 00:10:32.529 slat (nsec): min=27464, max=32606, avg=28116.76, stdev=1196.38 00:10:32.529 clat (usec): min=863, max=46111, avg=39869.85, stdev=10105.24 00:10:32.529 lat (usec): min=891, max=46144, avg=39897.97, stdev=10105.43 00:10:32.529 clat percentiles (usec): 00:10:32.529 | 1.00th=[ 865], 5.00th=[ 865], 10.00th=[41681], 20.00th=[41681], 00:10:32.529 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:32.529 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[45876], 00:10:32.529 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:10:32.529 | 99.99th=[45876] 00:10:32.529 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:32.529 slat (nsec): min=9642, max=69932, avg=32153.80, stdev=10319.82 
00:10:32.529 clat (usec): min=285, max=954, avg=614.50, stdev=114.59 00:10:32.529 lat (usec): min=295, max=990, avg=646.66, stdev=119.10 00:10:32.529 clat percentiles (usec): 00:10:32.529 | 1.00th=[ 338], 5.00th=[ 424], 10.00th=[ 474], 20.00th=[ 510], 00:10:32.529 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 644], 00:10:32.529 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 750], 95.00th=[ 791], 00:10:32.529 | 99.00th=[ 857], 99.50th=[ 914], 99.90th=[ 955], 99.95th=[ 955], 00:10:32.529 | 99.99th=[ 955] 00:10:32.529 bw ( KiB/s): min= 4096, max= 4096, per=44.98%, avg=4096.00, stdev= 0.00, samples=1 00:10:32.529 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:32.529 lat (usec) : 500=16.26%, 750=69.57%, 1000=11.15% 00:10:32.529 lat (msec) : 50=3.02% 00:10:32.529 cpu : usr=0.89%, sys=2.17%, ctx=532, majf=0, minf=1 00:10:32.529 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:32.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.529 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.529 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:32.529 00:10:32.529 Run status group 0 (all jobs): 00:10:32.529 READ: bw=2177KiB/s (2230kB/s), 63.9KiB/s-2046KiB/s (65.4kB/s-2095kB/s), io=2260KiB (2314kB), run=1001-1038msec 00:10:32.529 WRITE: bw=9106KiB/s (9325kB/s), 1973KiB/s-3305KiB/s (2020kB/s-3384kB/s), io=9452KiB (9679kB), run=1001-1038msec 00:10:32.529 00:10:32.529 Disk stats (read/write): 00:10:32.529 nvme0n1: ios=65/512, merge=0/0, ticks=690/196, in_queue=886, util=91.28% 00:10:32.529 nvme0n2: ios=48/512, merge=0/0, ticks=508/245, in_queue=753, util=87.53% 00:10:32.530 nvme0n3: ios=512/512, merge=0/0, ticks=509/291, in_queue=800, util=88.35% 00:10:32.530 nvme0n4: ios=34/512, merge=0/0, ticks=1391/242, in_queue=1633, util=96.89% 00:10:32.530 18:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:32.530 [global] 00:10:32.530 thread=1 00:10:32.530 invalidate=1 00:10:32.530 rw=randwrite 00:10:32.530 time_based=1 00:10:32.530 runtime=1 00:10:32.530 ioengine=libaio 00:10:32.530 direct=1 00:10:32.530 bs=4096 00:10:32.530 iodepth=1 00:10:32.530 norandommap=0 00:10:32.530 numjobs=1 00:10:32.530 00:10:32.530 verify_dump=1 00:10:32.530 verify_backlog=512 00:10:32.530 verify_state_save=0 00:10:32.530 do_verify=1 00:10:32.530 verify=crc32c-intel 00:10:32.530 [job0] 00:10:32.530 filename=/dev/nvme0n1 00:10:32.530 [job1] 00:10:32.530 filename=/dev/nvme0n2 00:10:32.530 [job2] 00:10:32.530 filename=/dev/nvme0n3 00:10:32.530 [job3] 00:10:32.530 filename=/dev/nvme0n4 00:10:32.530 Could not set queue depth (nvme0n1) 00:10:32.530 Could not set queue depth (nvme0n2) 00:10:32.530 Could not set queue depth (nvme0n3) 00:10:32.530 Could not set queue depth (nvme0n4) 00:10:32.793 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.793 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.793 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.793 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.793 fio-3.35 00:10:32.793 
Starting 4 threads 00:10:34.199 00:10:34.199 job0: (groupid=0, jobs=1): err= 0: pid=1995160: Fri Dec 6 18:22:28 2024 00:10:34.199 read: IOPS=469, BW=1879KiB/s (1924kB/s)(1928KiB/1026msec) 00:10:34.199 slat (nsec): min=7665, max=57425, avg=26370.12, stdev=3991.67 00:10:34.199 clat (usec): min=466, max=42943, avg=1413.49, stdev=4607.67 00:10:34.199 lat (usec): min=492, max=42970, avg=1439.86, stdev=4607.81 00:10:34.199 clat percentiles (usec): 00:10:34.199 | 1.00th=[ 676], 5.00th=[ 758], 10.00th=[ 799], 20.00th=[ 840], 00:10:34.199 | 30.00th=[ 881], 40.00th=[ 906], 50.00th=[ 914], 60.00th=[ 930], 00:10:34.199 | 70.00th=[ 938], 80.00th=[ 955], 90.00th=[ 979], 95.00th=[ 996], 00:10:34.199 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:34.199 | 99.99th=[42730] 00:10:34.199 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:10:34.199 slat (nsec): min=9980, max=65877, avg=32093.23, stdev=7463.08 00:10:34.199 clat (usec): min=210, max=1743, avg=598.52, stdev=169.17 00:10:34.199 lat (usec): min=222, max=1776, avg=630.61, stdev=170.16 00:10:34.199 clat percentiles (usec): 00:10:34.199 | 1.00th=[ 253], 5.00th=[ 318], 10.00th=[ 392], 20.00th=[ 457], 00:10:34.199 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 644], 00:10:34.199 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 791], 95.00th=[ 865], 00:10:34.199 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1745], 99.95th=[ 1745], 00:10:34.199 | 99.99th=[ 1745] 00:10:34.199 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.200 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.200 lat (usec) : 250=0.40%, 500=13.68%, 750=31.29%, 1000=52.31% 00:10:34.200 lat (msec) : 2=1.71%, 50=0.60% 00:10:34.200 cpu : usr=1.76%, sys=2.63%, ctx=995, majf=0, minf=1 00:10:34.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.200 issued rwts: total=482,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.200 job1: (groupid=0, jobs=1): err= 0: pid=1995164: Fri Dec 6 18:22:28 2024 00:10:34.200 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1018msec) 00:10:34.200 slat (nsec): min=26532, max=27524, avg=26849.17, stdev=228.33 00:10:34.200 clat (usec): min=40918, max=42014, avg=41680.66, stdev=467.66 00:10:34.200 lat (usec): min=40945, max=42041, avg=41707.51, stdev=467.61 00:10:34.200 clat percentiles (usec): 00:10:34.200 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:34.200 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:34.200 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:34.200 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:34.200 | 99.99th=[42206] 00:10:34.200 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:34.200 slat (nsec): min=8953, max=53106, avg=25116.73, stdev=11007.57 00:10:34.200 clat (usec): min=120, max=1285, avg=490.12, stdev=163.89 00:10:34.200 lat (usec): min=132, max=1301, avg=515.24, stdev=170.33 00:10:34.200 clat percentiles (usec): 00:10:34.200 | 1.00th=[ 161], 5.00th=[ 243], 10.00th=[ 273], 20.00th=[ 330], 00:10:34.200 | 30.00th=[ 396], 40.00th=[ 433], 50.00th=[ 482], 60.00th=[ 545], 00:10:34.200 | 70.00th=[ 594], 
80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 725], 00:10:34.200 | 99.00th=[ 791], 99.50th=[ 832], 99.90th=[ 1287], 99.95th=[ 1287], 00:10:34.200 | 99.99th=[ 1287] 00:10:34.200 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.200 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.200 lat (usec) : 250=5.85%, 500=45.28%, 750=42.26%, 1000=3.02% 00:10:34.200 lat (msec) : 2=0.19%, 50=3.40% 00:10:34.200 cpu : usr=0.69%, sys=1.87%, ctx=530, majf=0, minf=2 00:10:34.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.200 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.200 job2: (groupid=0, jobs=1): err= 0: pid=1995172: Fri Dec 6 18:22:28 2024 00:10:34.200 read: IOPS=16, BW=67.2KiB/s (68.8kB/s)(68.0KiB/1012msec) 00:10:34.200 slat (nsec): min=27353, max=28359, avg=27797.29, stdev=264.70 00:10:34.200 clat (usec): min=1072, max=43007, avg=39610.53, stdev=9949.91 00:10:34.200 lat (usec): min=1099, max=43035, avg=39638.32, stdev=9949.91 00:10:34.200 clat percentiles (usec): 00:10:34.200 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41157], 00:10:34.200 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:34.200 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:10:34.200 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:34.200 | 99.99th=[43254] 00:10:34.200 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:10:34.200 slat (nsec): min=9357, max=55434, avg=31794.78, stdev=9443.65 00:10:34.200 clat (usec): min=123, max=1882, avg=620.12, stdev=186.80 00:10:34.200 lat (usec): min=132, max=1922, avg=651.92, stdev=190.16 00:10:34.200 clat percentiles (usec): 00:10:34.200 | 1.00th=[ 145], 5.00th=[ 289], 10.00th=[ 351], 20.00th=[ 474], 00:10:34.200 | 30.00th=[ 553], 40.00th=[ 594], 50.00th=[ 644], 60.00th=[ 676], 00:10:34.200 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 832], 95.00th=[ 898], 00:10:34.200 | 99.00th=[ 955], 99.50th=[ 1020], 99.90th=[ 1876], 99.95th=[ 1876], 00:10:34.200 | 99.99th=[ 1876] 00:10:34.200 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.200 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.200 lat (usec) : 250=2.46%, 500=19.09%, 750=54.25%, 1000=20.42% 00:10:34.200 lat (msec) : 2=0.76%, 50=3.02% 00:10:34.200 cpu : usr=0.79%, sys=2.27%, ctx=531, majf=0, minf=1 00:10:34.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.200 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.200 job3: (groupid=0, jobs=1): err= 0: pid=1995176: Fri Dec 6 18:22:28 2024 00:10:34.200 read: IOPS=29, BW=119KiB/s (122kB/s)(120KiB/1007msec) 00:10:34.200 slat (nsec): min=26007, max=64145, avg=27711.50, stdev=6883.82 00:10:34.200 clat (usec): min=963, max=42945, avg=22820.04, stdev=20674.79 00:10:34.200 lat (usec): min=990, max=42972, avg=22847.75, stdev=20673.33 00:10:34.200 clat percentiles 
(usec): 00:10:34.200 | 1.00th=[ 963], 5.00th=[ 971], 10.00th=[ 1029], 20.00th=[ 1037], 00:10:34.200 | 30.00th=[ 1045], 40.00th=[ 1156], 50.00th=[41157], 60.00th=[41157], 00:10:34.200 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:34.200 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:34.200 | 99.99th=[42730] 00:10:34.200 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:34.200 slat (nsec): min=8583, max=72646, avg=30479.67, stdev=8791.07 00:10:34.200 clat (usec): min=203, max=1520, avg=589.04, stdev=168.09 00:10:34.200 lat (usec): min=222, max=1553, avg=619.52, stdev=169.67 00:10:34.200 clat percentiles (usec): 00:10:34.200 | 1.00th=[ 241], 5.00th=[ 314], 10.00th=[ 355], 20.00th=[ 445], 00:10:34.200 | 30.00th=[ 506], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 635], 00:10:34.200 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 799], 95.00th=[ 857], 00:10:34.200 | 99.00th=[ 963], 99.50th=[ 1037], 99.90th=[ 1516], 99.95th=[ 1516], 00:10:34.200 | 99.99th=[ 1516] 00:10:34.200 bw ( KiB/s): min= 4096, max= 4096, per=51.30%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.200 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.200 lat (usec) : 250=1.85%, 500=25.83%, 750=51.85%, 1000=14.76% 00:10:34.200 lat (msec) : 2=2.77%, 50=2.95% 00:10:34.200 cpu : usr=1.49%, sys=1.69%, ctx=542, majf=0, minf=2 00:10:34.200 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.200 issued rwts: total=30,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.200 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.200 00:10:34.200 Run status group 0 (all jobs): 00:10:34.200 READ: bw=2133KiB/s (2184kB/s), 67.2KiB/s-1879KiB/s (68.8kB/s-1924kB/s), io=2188KiB (2241kB), run=1007-1026msec 00:10:34.200 WRITE: bw=7984KiB/s (8176kB/s), 1996KiB/s-2034KiB/s (2044kB/s-2083kB/s), io=8192KiB (8389kB), run=1007-1026msec 00:10:34.200 00:10:34.200 Disk stats (read/write): 00:10:34.200 nvme0n1: ios=527/512, merge=0/0, ticks=705/295, in_queue=1000, util=97.39% 00:10:34.200 nvme0n2: ios=54/512, merge=0/0, ticks=679/201, in_queue=880, util=96.53% 00:10:34.200 nvme0n3: ios=56/512, merge=0/0, ticks=651/249, in_queue=900, util=100.00% 00:10:34.200 nvme0n4: ios=55/512, merge=0/0, ticks=604/219, in_queue=823, util=96.15% 00:10:34.200 18:22:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:34.200 [global] 00:10:34.200 thread=1 00:10:34.200 invalidate=1 00:10:34.200 rw=write 00:10:34.200 time_based=1 00:10:34.200 runtime=1 00:10:34.200 ioengine=libaio 00:10:34.200 direct=1 00:10:34.200 bs=4096 00:10:34.200 iodepth=128 00:10:34.200 norandommap=0 00:10:34.200 numjobs=1 00:10:34.200 00:10:34.201 verify_dump=1 00:10:34.201 verify_backlog=512 00:10:34.201 verify_state_save=0 00:10:34.201 do_verify=1 00:10:34.201 verify=crc32c-intel 00:10:34.201 [job0] 00:10:34.201 filename=/dev/nvme0n1 00:10:34.201 [job1] 00:10:34.201 filename=/dev/nvme0n2 00:10:34.201 [job2] 00:10:34.201 filename=/dev/nvme0n3 00:10:34.201 [job3] 00:10:34.201 filename=/dev/nvme0n4 00:10:34.201 Could not set queue depth (nvme0n1) 00:10:34.201 Could not set queue depth (nvme0n2) 00:10:34.201 Could not set queue depth (nvme0n3) 00:10:34.201 Could 
not set queue depth (nvme0n4) 00:10:34.465 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.465 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.465 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.465 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:34.465 fio-3.35 00:10:34.465 Starting 4 threads 00:10:35.877 00:10:35.877 job0: (groupid=0, jobs=1): err= 0: pid=1995690: Fri Dec 6 18:22:30 2024 00:10:35.877 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:10:35.877 slat (nsec): min=877, max=13132k, avg=74312.67, stdev=549901.50 00:10:35.877 clat (usec): min=2851, max=37210, avg=10082.16, stdev=5408.85 00:10:35.877 lat (usec): min=2949, max=37216, avg=10156.47, stdev=5444.00 00:10:35.877 clat percentiles (usec): 00:10:35.877 | 1.00th=[ 3818], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 6980], 00:10:35.877 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8717], 00:10:35.877 | 70.00th=[10159], 80.00th=[11731], 90.00th=[18220], 95.00th=[19530], 00:10:35.877 | 99.00th=[35914], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:10:35.877 | 99.99th=[36963] 00:10:35.877 write: IOPS=6414, BW=25.1MiB/s (26.3MB/s)(25.2MiB/1004msec); 0 zone resets 00:10:35.877 slat (nsec): min=1565, max=14191k, avg=76168.18, stdev=511815.35 00:10:35.877 clat (usec): min=960, max=45394, avg=10167.57, stdev=8023.57 00:10:35.877 lat (usec): min=1295, max=45396, avg=10243.74, stdev=8074.50 00:10:35.877 clat percentiles (usec): 00:10:35.877 | 1.00th=[ 2507], 5.00th=[ 3884], 10.00th=[ 4752], 20.00th=[ 6259], 00:10:35.877 | 30.00th=[ 6652], 40.00th=[ 6980], 50.00th=[ 7439], 60.00th=[ 7767], 00:10:35.877 | 70.00th=[ 8225], 80.00th=[11076], 90.00th=[21627], 95.00th=[27132], 00:10:35.877 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44827], 99.95th=[45351], 00:10:35.877 | 99.99th=[45351] 00:10:35.877 bw ( KiB/s): min=20304, max=30192, per=24.96%, avg=25248.00, stdev=6991.87, samples=2 00:10:35.877 iops : min= 5076, max= 7548, avg=6312.00, stdev=1747.97, samples=2 00:10:35.877 lat (usec) : 1000=0.01% 00:10:35.877 lat (msec) : 2=0.10%, 4=3.37%, 10=69.48%, 20=18.91%, 50=8.13% 00:10:35.877 cpu : usr=4.99%, sys=6.78%, ctx=522, majf=0, minf=1 00:10:35.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:35.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.877 issued rwts: total=6144,6440,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.877 job1: (groupid=0, jobs=1): err= 0: pid=1995691: Fri Dec 6 18:22:30 2024 00:10:35.877 read: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec) 00:10:35.877 slat (nsec): min=936, max=15941k, avg=66606.57, stdev=509618.46 00:10:35.877 clat (usec): min=2417, max=28106, avg=9150.24, stdev=3270.87 00:10:35.877 lat (usec): min=2426, max=28112, avg=9216.85, stdev=3302.85 00:10:35.877 clat percentiles (usec): 00:10:35.877 | 1.00th=[ 3982], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7242], 00:10:35.877 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8586], 00:10:35.877 | 70.00th=[ 9241], 80.00th=[10159], 90.00th=[12518], 95.00th=[15270], 00:10:35.877 | 99.00th=[23200], 99.50th=[25035], 
99.90th=[27132], 99.95th=[27132], 00:10:35.877 | 99.99th=[28181] 00:10:35.877 write: IOPS=6767, BW=26.4MiB/s (27.7MB/s)(26.6MiB/1008msec); 0 zone resets 00:10:35.877 slat (nsec): min=1534, max=6692.0k, avg=70847.50, stdev=446153.92 00:10:35.877 clat (usec): min=283, max=62675, avg=9812.84, stdev=7580.91 00:10:35.877 lat (usec): min=318, max=62685, avg=9883.69, stdev=7627.85 00:10:35.877 clat percentiles (usec): 00:10:35.877 | 1.00th=[ 1647], 5.00th=[ 3916], 10.00th=[ 4621], 20.00th=[ 6521], 00:10:35.877 | 30.00th=[ 6915], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7767], 00:10:35.877 | 70.00th=[ 8225], 80.00th=[11600], 90.00th=[17695], 95.00th=[22938], 00:10:35.877 | 99.00th=[48497], 99.50th=[58459], 99.90th=[59507], 99.95th=[62653], 00:10:35.877 | 99.99th=[62653] 00:10:35.877 bw ( KiB/s): min=24888, max=28664, per=26.47%, avg=26776.00, stdev=2670.04, samples=2 00:10:35.877 iops : min= 6222, max= 7166, avg=6694.00, stdev=667.51, samples=2 00:10:35.877 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.10% 00:10:35.877 lat (msec) : 2=0.54%, 4=2.40%, 10=74.71%, 20=17.15%, 50=4.59% 00:10:35.877 lat (msec) : 100=0.47% 00:10:35.877 cpu : usr=6.26%, sys=6.65%, ctx=524, majf=0, minf=2 00:10:35.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:35.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.877 issued rwts: total=6656,6822,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.877 job2: (groupid=0, jobs=1): err= 0: pid=1995697: Fri Dec 6 18:22:30 2024 00:10:35.877 read: IOPS=7019, BW=27.4MiB/s (28.8MB/s)(27.5MiB/1002msec) 00:10:35.877 slat (nsec): min=909, max=9122.9k, avg=71016.11, stdev=475225.93 00:10:35.877 clat (usec): min=740, max=28234, avg=8677.81, stdev=2057.53 00:10:35.877 lat (usec): min=3454, max=28239, avg=8748.82, stdev=2092.98 00:10:35.877 clat percentiles (usec): 00:10:35.877 | 1.00th=[ 4080], 5.00th=[ 6128], 10.00th=[ 6587], 20.00th=[ 6849], 00:10:35.877 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9372], 00:10:35.877 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10814], 95.00th=[12256], 00:10:35.877 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15401], 99.95th=[19006], 00:10:35.877 | 99.99th=[28181] 00:10:35.877 write: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:10:35.877 slat (nsec): min=1558, max=8874.9k, avg=65853.95, stdev=362845.25 00:10:35.877 clat (usec): min=1289, max=30522, avg=9213.79, stdev=3524.47 00:10:35.877 lat (usec): min=1301, max=30529, avg=9279.65, stdev=3545.20 00:10:35.877 clat percentiles (usec): 00:10:35.877 | 1.00th=[ 4228], 5.00th=[ 5997], 10.00th=[ 6456], 20.00th=[ 6718], 00:10:35.877 | 30.00th=[ 7177], 40.00th=[ 7963], 50.00th=[ 8979], 60.00th=[ 9503], 00:10:35.877 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11469], 95.00th=[14746], 00:10:35.877 | 99.00th=[28181], 99.50th=[28705], 99.90th=[30016], 99.95th=[30016], 00:10:35.877 | 99.99th=[30540] 00:10:35.877 bw ( KiB/s): min=25272, max=32072, per=28.35%, avg=28672.00, stdev=4808.33, samples=2 00:10:35.877 iops : min= 6318, max= 8018, avg=7168.00, stdev=1202.08, samples=2 00:10:35.877 lat (usec) : 750=0.01% 00:10:35.877 lat (msec) : 2=0.07%, 4=0.70%, 10=71.22%, 20=26.42%, 50=1.58% 00:10:35.877 cpu : usr=3.80%, sys=5.79%, ctx=916, majf=0, minf=1 00:10:35.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:35.877 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.877 issued rwts: total=7034,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.877 job3: (groupid=0, jobs=1): err= 0: pid=1995702: Fri Dec 6 18:22:30 2024 00:10:35.877 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:35.877 slat (nsec): min=917, max=44256k, avg=118390.52, stdev=968443.99 00:10:35.877 clat (usec): min=2691, max=74871, avg=15169.10, stdev=11272.71 00:10:35.877 lat (usec): min=2699, max=74875, avg=15287.49, stdev=11322.65 00:10:35.877 clat percentiles (usec): 00:10:35.877 | 1.00th=[ 5735], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9241], 00:10:35.877 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11731], 60.00th=[12256], 00:10:35.877 | 70.00th=[13829], 80.00th=[17957], 90.00th=[22414], 95.00th=[38536], 00:10:35.877 | 99.00th=[67634], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:10:35.877 | 99.99th=[74974] 00:10:35.877 write: IOPS=5041, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets 00:10:35.877 slat (nsec): min=1573, max=8766.0k, avg=84695.52, stdev=493262.61 00:10:35.877 clat (usec): min=1227, max=26887, avg=11347.80, stdev=3728.72 00:10:35.877 lat (usec): min=1239, max=26894, avg=11432.49, stdev=3730.28 00:10:35.877 clat percentiles (usec): 00:10:35.877 | 1.00th=[ 3851], 5.00th=[ 7111], 10.00th=[ 8848], 20.00th=[ 9110], 00:10:35.877 | 30.00th=[ 9372], 40.00th=[10290], 50.00th=[11076], 60.00th=[11469], 00:10:35.877 | 70.00th=[11731], 80.00th=[11994], 90.00th=[14615], 95.00th=[20317], 00:10:35.877 | 99.00th=[26346], 99.50th=[26870], 99.90th=[26870], 99.95th=[26870], 00:10:35.877 | 99.99th=[26870] 00:10:35.877 bw ( KiB/s): min=18952, max=20480, per=19.49%, avg=19716.00, stdev=1080.46, samples=2 00:10:35.877 iops : min= 4738, max= 5120, avg=4929.00, stdev=270.11, samples=2 00:10:35.877 lat (msec) : 2=0.18%, 4=0.61%, 10=34.13%, 20=56.47%, 50=6.65% 00:10:35.877 lat (msec) : 100=1.96% 00:10:35.877 cpu : usr=3.59%, sys=4.49%, ctx=424, majf=0, minf=1 00:10:35.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:35.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:35.877 issued rwts: total=4608,5057,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:35.877 00:10:35.877 Run status group 0 (all jobs): 00:10:35.877 READ: bw=94.7MiB/s (99.3MB/s), 17.9MiB/s-27.4MiB/s (18.8MB/s-28.8MB/s), io=95.5MiB (100MB), run=1002-1008msec 00:10:35.877 WRITE: bw=98.8MiB/s (104MB/s), 19.7MiB/s-27.9MiB/s (20.7MB/s-29.3MB/s), io=99.6MiB (104MB), run=1002-1008msec 00:10:35.877 00:10:35.877 Disk stats (read/write): 00:10:35.877 nvme0n1: ios=5682/6111, merge=0/0, ticks=43075/40818, in_queue=83893, util=91.88% 00:10:35.877 nvme0n2: ios=5672/6015, merge=0/0, ticks=45494/48592, in_queue=94086, util=96.43% 00:10:35.877 nvme0n3: ios=5426/5632, merge=0/0, ticks=27183/28326, in_queue=55509, util=88.28% 00:10:35.877 nvme0n4: ios=4096/4476, merge=0/0, ticks=19696/18466, in_queue=38162, util=88.78% 00:10:35.877 18:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:35.877 [global] 00:10:35.877 thread=1 00:10:35.877 
invalidate=1 00:10:35.877 rw=randwrite 00:10:35.877 time_based=1 00:10:35.877 runtime=1 00:10:35.877 ioengine=libaio 00:10:35.877 direct=1 00:10:35.877 bs=4096 00:10:35.877 iodepth=128 00:10:35.877 norandommap=0 00:10:35.877 numjobs=1 00:10:35.877 00:10:35.877 verify_dump=1 00:10:35.877 verify_backlog=512 00:10:35.877 verify_state_save=0 00:10:35.877 do_verify=1 00:10:35.877 verify=crc32c-intel 00:10:35.877 [job0] 00:10:35.877 filename=/dev/nvme0n1 00:10:35.877 [job1] 00:10:35.877 filename=/dev/nvme0n2 00:10:35.877 [job2] 00:10:35.877 filename=/dev/nvme0n3 00:10:35.877 [job3] 00:10:35.877 filename=/dev/nvme0n4 00:10:35.878 Could not set queue depth (nvme0n1) 00:10:35.878 Could not set queue depth (nvme0n2) 00:10:35.878 Could not set queue depth (nvme0n3) 00:10:35.878 Could not set queue depth (nvme0n4) 00:10:36.140 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.140 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.140 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.140 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:36.140 fio-3.35 00:10:36.140 Starting 4 threads 00:10:37.559 00:10:37.559 job0: (groupid=0, jobs=1): err= 0: pid=1996209: Fri Dec 6 18:22:31 2024 00:10:37.559 read: IOPS=5340, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1008msec) 00:10:37.559 slat (nsec): min=940, max=17458k, avg=79398.46, stdev=733031.61 00:10:37.559 clat (usec): min=1088, max=40072, avg=11514.36, stdev=5362.29 00:10:37.559 lat (usec): min=3038, max=40095, avg=11593.76, stdev=5425.17 00:10:37.559 clat percentiles (usec): 00:10:37.559 | 1.00th=[ 4178], 5.00th=[ 5538], 10.00th=[ 6849], 20.00th=[ 7767], 00:10:37.559 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 9634], 60.00th=[10421], 00:10:37.559 | 70.00th=[12649], 80.00th=[15664], 90.00th=[19530], 95.00th=[24511], 00:10:37.559 | 99.00th=[25822], 99.50th=[28705], 99.90th=[32900], 99.95th=[36963], 00:10:37.559 | 99.99th=[40109] 00:10:37.559 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:10:37.559 slat (nsec): min=1506, max=11377k, avg=77577.20, stdev=496236.33 00:10:37.559 clat (usec): min=632, max=46790, avg=11723.29, stdev=8335.75 00:10:37.559 lat (usec): min=647, max=46799, avg=11800.87, stdev=8389.70 00:10:37.559 clat percentiles (usec): 00:10:37.559 | 1.00th=[ 1876], 5.00th=[ 3359], 10.00th=[ 4293], 20.00th=[ 5866], 00:10:37.559 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 8029], 60.00th=[10945], 00:10:37.559 | 70.00th=[14091], 80.00th=[15139], 90.00th=[24773], 95.00th=[30278], 00:10:37.559 | 99.00th=[40633], 99.50th=[43254], 99.90th=[45876], 99.95th=[46924], 00:10:37.559 | 99.99th=[46924] 00:10:37.559 bw ( KiB/s): min=20720, max=24336, per=21.67%, avg=22528.00, stdev=2556.90, samples=2 00:10:37.559 iops : min= 5180, max= 6084, avg=5632.00, stdev=639.22, samples=2 00:10:37.559 lat (usec) : 750=0.05%, 1000=0.01% 00:10:37.559 lat (msec) : 2=0.65%, 4=2.86%, 10=50.80%, 20=34.16%, 50=11.46% 00:10:37.559 cpu : usr=4.17%, sys=6.45%, ctx=419, majf=0, minf=1 00:10:37.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:37.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.559 issued rwts: total=5383,5632,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:37.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.559 job1: (groupid=0, jobs=1): err= 0: pid=1996210: Fri Dec 6 18:22:31 2024 00:10:37.559 read: IOPS=8612, BW=33.6MiB/s (35.3MB/s)(33.8MiB/1005msec) 00:10:37.559 slat (nsec): min=884, max=3688.4k, avg=57856.69, stdev=362168.42 00:10:37.559 clat (usec): min=1461, max=11356, avg=7329.53, stdev=905.12 00:10:37.559 lat (usec): min=4280, max=11370, avg=7387.39, stdev=947.36 00:10:37.559 clat percentiles (usec): 00:10:37.559 | 1.00th=[ 4883], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6849], 00:10:37.559 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:10:37.559 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 8455], 95.00th=[ 8848], 00:10:37.559 | 99.00th=[10028], 99.50th=[10290], 99.90th=[10814], 99.95th=[10945], 00:10:37.559 | 99.99th=[11338] 00:10:37.559 write: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec); 0 zone resets 00:10:37.559 slat (nsec): min=1485, max=13702k, avg=53561.44, stdev=299784.12 00:10:37.559 clat (usec): min=3702, max=27619, avg=7172.16, stdev=1673.23 00:10:37.559 lat (usec): min=3703, max=27627, avg=7225.72, stdev=1691.90 00:10:37.559 clat percentiles (usec): 00:10:37.559 | 1.00th=[ 4424], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 6718], 00:10:37.559 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7111], 00:10:37.559 | 70.00th=[ 7177], 80.00th=[ 7373], 90.00th=[ 7832], 95.00th=[ 8586], 00:10:37.559 | 99.00th=[10028], 99.50th=[19530], 99.90th=[27657], 99.95th=[27657], 00:10:37.559 | 99.99th=[27657] 00:10:37.559 bw ( KiB/s): min=34424, max=35208, per=33.50%, avg=34816.00, stdev=554.37, samples=2 00:10:37.559 iops : min= 8606, max= 8802, avg=8704.00, stdev=138.59, samples=2 00:10:37.559 lat (msec) : 2=0.01%, 4=0.07%, 10=99.00%, 20=0.73%, 50=0.20% 00:10:37.559 cpu : usr=4.08%, sys=7.47%, ctx=988, majf=0, minf=1 00:10:37.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:37.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.559 issued rwts: total=8656,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.559 job2: (groupid=0, jobs=1): err= 0: pid=1996211: Fri Dec 6 18:22:31 2024 00:10:37.559 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:10:37.559 slat (nsec): min=957, max=9219.6k, avg=86139.21, stdev=620763.21 00:10:37.559 clat (usec): min=3194, max=70339, avg=10621.21, stdev=6035.26 00:10:37.559 lat (usec): min=3201, max=70347, avg=10707.35, stdev=6103.72 00:10:37.559 clat percentiles (usec): 00:10:37.559 | 1.00th=[ 5866], 5.00th=[ 6587], 10.00th=[ 7111], 20.00th=[ 7570], 00:10:37.559 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:10:37.559 | 70.00th=[10552], 80.00th=[12387], 90.00th=[15139], 95.00th=[20055], 00:10:37.559 | 99.00th=[32375], 99.50th=[56361], 99.90th=[69731], 99.95th=[69731], 00:10:37.559 | 99.99th=[70779] 00:10:37.559 write: IOPS=6508, BW=25.4MiB/s (26.7MB/s)(25.6MiB/1007msec); 0 zone resets 00:10:37.559 slat (nsec): min=1602, max=9629.1k, avg=66312.40, stdev=459250.49 00:10:37.559 clat (usec): min=1067, max=70348, avg=9486.29, stdev=7126.77 00:10:37.559 lat (usec): min=1075, max=70357, avg=9552.60, stdev=7154.14 00:10:37.559 clat percentiles (usec): 00:10:37.559 | 1.00th=[ 3130], 5.00th=[ 4621], 10.00th=[ 4883], 20.00th=[ 5997], 00:10:37.559 | 30.00th=[ 7177], 
40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8160], 00:10:37.559 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[14091], 95.00th=[15533], 00:10:37.559 | 99.00th=[60556], 99.50th=[64226], 99.90th=[64750], 99.95th=[70779], 00:10:37.559 | 99.99th=[70779] 00:10:37.559 bw ( KiB/s): min=24576, max=26832, per=24.73%, avg=25704.00, stdev=1595.23, samples=2 00:10:37.559 iops : min= 6144, max= 6708, avg=6426.00, stdev=398.81, samples=2 00:10:37.559 lat (msec) : 2=0.09%, 4=1.29%, 10=65.96%, 20=28.98%, 50=2.73% 00:10:37.559 lat (msec) : 100=0.94% 00:10:37.559 cpu : usr=3.98%, sys=8.05%, ctx=494, majf=0, minf=1 00:10:37.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:37.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.559 issued rwts: total=6144,6554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.559 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.559 job3: (groupid=0, jobs=1): err= 0: pid=1996214: Fri Dec 6 18:22:31 2024 00:10:37.559 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:37.559 slat (nsec): min=920, max=17220k, avg=87675.25, stdev=616693.65 00:10:37.559 clat (usec): min=5353, max=44927, avg=11307.96, stdev=5930.70 00:10:37.559 lat (usec): min=5373, max=44978, avg=11395.63, stdev=5984.68 00:10:37.559 clat percentiles (usec): 00:10:37.559 | 1.00th=[ 6128], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8225], 00:10:37.559 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[10028], 00:10:37.559 | 70.00th=[10814], 80.00th=[11469], 90.00th=[16909], 95.00th=[23987], 00:10:37.559 | 99.00th=[35914], 99.50th=[39584], 99.90th=[42206], 99.95th=[42206], 00:10:37.559 | 99.99th=[44827] 00:10:37.559 write: IOPS=5281, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1004msec); 0 zone resets 00:10:37.559 slat (nsec): min=1522, max=8467.3k, avg=97865.49, stdev=531026.26 00:10:37.559 clat (usec): min=711, max=72475, avg=13106.01, stdev=11926.34 00:10:37.559 lat (usec): min=719, max=72483, avg=13203.87, stdev=12006.81 00:10:37.559 clat percentiles (usec): 00:10:37.559 | 1.00th=[ 2474], 5.00th=[ 5800], 10.00th=[ 7308], 20.00th=[ 7963], 00:10:37.559 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9503], 00:10:37.559 | 70.00th=[12256], 80.00th=[14877], 90.00th=[17695], 95.00th=[45351], 00:10:37.559 | 99.00th=[64750], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:10:37.559 | 99.99th=[72877] 00:10:37.559 bw ( KiB/s): min=16832, max=24576, per=19.92%, avg=20704.00, stdev=5475.83, samples=2 00:10:37.559 iops : min= 4208, max= 6144, avg=5176.00, stdev=1368.96, samples=2 00:10:37.559 lat (usec) : 750=0.03% 00:10:37.559 lat (msec) : 2=0.24%, 4=0.83%, 10=61.76%, 20=28.50%, 50=6.44% 00:10:37.559 lat (msec) : 100=2.21% 00:10:37.559 cpu : usr=3.29%, sys=5.68%, ctx=569, majf=0, minf=1 00:10:37.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:37.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:37.559 issued rwts: total=5120,5303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.560 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:37.560 00:10:37.560 Run status group 0 (all jobs): 00:10:37.560 READ: bw=98.1MiB/s (103MB/s), 19.9MiB/s-33.6MiB/s (20.9MB/s-35.3MB/s), io=98.8MiB (104MB), run=1004-1008msec 00:10:37.560 WRITE: bw=102MiB/s (106MB/s), 20.6MiB/s-33.8MiB/s 
(21.6MB/s-35.5MB/s), io=102MiB (107MB), run=1004-1008msec 00:10:37.560 00:10:37.560 Disk stats (read/write): 00:10:37.560 nvme0n1: ios=4447/4608, merge=0/0, ticks=48970/52207, in_queue=101177, util=90.48% 00:10:37.560 nvme0n2: ios=7160/7168, merge=0/0, ticks=25624/23369, in_queue=48993, util=86.18% 00:10:37.560 nvme0n3: ios=4733/5120, merge=0/0, ticks=51365/48039, in_queue=99404, util=88.23% 00:10:37.560 nvme0n4: ios=4654/4631, merge=0/0, ticks=24421/26476, in_queue=50897, util=95.59% 00:10:37.560 18:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:37.560 18:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1996548 00:10:37.560 18:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:37.560 18:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:37.560 [global] 00:10:37.560 thread=1 00:10:37.560 invalidate=1 00:10:37.560 rw=read 00:10:37.560 time_based=1 00:10:37.560 runtime=10 00:10:37.560 ioengine=libaio 00:10:37.560 direct=1 00:10:37.560 bs=4096 00:10:37.560 iodepth=1 00:10:37.560 norandommap=1 00:10:37.560 numjobs=1 00:10:37.560 00:10:37.560 [job0] 00:10:37.560 filename=/dev/nvme0n1 00:10:37.560 [job1] 00:10:37.560 filename=/dev/nvme0n2 00:10:37.560 [job2] 00:10:37.560 filename=/dev/nvme0n3 00:10:37.560 [job3] 00:10:37.560 filename=/dev/nvme0n4 00:10:37.560 Could not set queue depth (nvme0n1) 00:10:37.560 Could not set queue depth (nvme0n2) 00:10:37.560 Could not set queue depth (nvme0n3) 00:10:37.560 Could not set queue depth (nvme0n4) 00:10:37.826 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.826 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.826 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.826 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:37.826 fio-3.35 00:10:37.826 Starting 4 threads 00:10:40.372 18:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:40.633 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:40.633 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=282624, buflen=4096 00:10:40.633 fio: pid=1996742, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.633 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7286784, buflen=4096 00:10:40.633 fio: pid=1996741, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.633 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.633 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:40.892 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=13520896, buflen=4096 00:10:40.892 fio: pid=1996739, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
00:10:40.892 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.892 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:41.153 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=9842688, buflen=4096 00:10:41.153 fio: pid=1996740, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:41.153 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.153 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:41.153 00:10:41.153 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1996739: Fri Dec 6 18:22:35 2024 00:10:41.153 read: IOPS=1110, BW=4440KiB/s (4546kB/s)(12.9MiB/2974msec) 00:10:41.153 slat (usec): min=6, max=31114, avg=37.37, stdev=600.70 00:10:41.153 clat (usec): min=206, max=1646, avg=851.07, stdev=181.02 00:10:41.153 lat (usec): min=214, max=32042, avg=888.44, stdev=630.77 00:10:41.153 clat percentiles (usec): 00:10:41.153 | 1.00th=[ 371], 5.00th=[ 611], 10.00th=[ 668], 20.00th=[ 742], 00:10:41.153 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 807], 60.00th=[ 832], 00:10:41.153 | 70.00th=[ 938], 80.00th=[ 1012], 90.00th=[ 1123], 95.00th=[ 1172], 00:10:41.153 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1516], 99.95th=[ 1549], 00:10:41.153 | 99.99th=[ 1647] 00:10:41.153 bw ( KiB/s): min= 3632, max= 5112, per=47.43%, avg=4521.60, stdev=732.68, samples=5 00:10:41.153 iops : min= 908, max= 1278, avg=1130.40, stdev=183.17, samples=5 00:10:41.153 lat (usec) : 250=0.30%, 500=2.57%, 750=20.11%, 1000=55.33% 00:10:41.153 lat (msec) : 2=21.65% 00:10:41.153 cpu : usr=1.11%, sys=3.06%, ctx=3304, majf=0, minf=2 00:10:41.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.153 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.153 issued rwts: total=3302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.153 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1996740: Fri Dec 6 18:22:35 2024 00:10:41.153 read: IOPS=758, BW=3033KiB/s (3106kB/s)(9612KiB/3169msec) 00:10:41.153 slat (usec): min=6, max=15706, avg=37.22, stdev=385.59 00:10:41.153 clat (usec): min=181, max=46281, avg=1275.48, stdev=5533.34 00:10:41.153 lat (usec): min=188, max=56982, avg=1309.91, stdev=5604.49 00:10:41.153 clat percentiles (usec): 00:10:41.153 | 1.00th=[ 245], 5.00th=[ 293], 10.00th=[ 347], 20.00th=[ 420], 00:10:41.153 | 30.00th=[ 474], 40.00th=[ 502], 50.00th=[ 519], 60.00th=[ 545], 00:10:41.153 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 742], 95.00th=[ 824], 00:10:41.153 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:10:41.153 | 99.99th=[46400] 00:10:41.153 bw ( KiB/s): min= 304, max= 7344, per=33.44%, avg=3188.67, stdev=3142.49, samples=6 00:10:41.153 iops : min= 76, max= 1836, avg=797.17, stdev=785.62, samples=6 00:10:41.153 lat (usec) : 250=1.21%, 500=38.89%, 750=50.37%, 1000=6.20% 00:10:41.153 lat 
(msec) : 2=1.46%, 10=0.04%, 50=1.79% 00:10:41.153 cpu : usr=0.95%, sys=2.05%, ctx=2409, majf=0, minf=2 00:10:41.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.153 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.153 issued rwts: total=2404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.153 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1996741: Fri Dec 6 18:22:35 2024 00:10:41.153 read: IOPS=638, BW=2552KiB/s (2614kB/s)(7116KiB/2788msec) 00:10:41.153 slat (nsec): min=6882, max=61058, avg=25218.60, stdev=5871.90 00:10:41.153 clat (usec): min=217, max=42671, avg=1523.68, stdev=5105.01 00:10:41.153 lat (usec): min=225, max=42696, avg=1548.90, stdev=5105.05 00:10:41.153 clat percentiles (usec): 00:10:41.153 | 1.00th=[ 355], 5.00th=[ 519], 10.00th=[ 578], 20.00th=[ 660], 00:10:41.153 | 30.00th=[ 725], 40.00th=[ 791], 50.00th=[ 848], 60.00th=[ 930], 00:10:41.153 | 70.00th=[ 1090], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1254], 00:10:41.153 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42730], 00:10:41.153 | 99.99th=[42730] 00:10:41.153 bw ( KiB/s): min= 1192, max= 5328, per=23.92%, avg=2280.00, stdev=1716.60, samples=5 00:10:41.153 iops : min= 298, max= 1332, avg=570.00, stdev=429.15, samples=5 00:10:41.154 lat (usec) : 250=0.11%, 500=3.99%, 750=29.72%, 1000=29.21% 00:10:41.154 lat (msec) : 2=35.34%, 50=1.57% 00:10:41.154 cpu : usr=0.57%, sys=1.97%, ctx=1780, majf=0, minf=2 00:10:41.154 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.154 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.154 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.154 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.154 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1996742: Fri Dec 6 18:22:35 2024 00:10:41.154 read: IOPS=26, BW=105KiB/s (107kB/s)(276KiB/2641msec) 00:10:41.154 slat (nsec): min=8492, max=42057, avg=25449.57, stdev=2930.22 00:10:41.154 clat (usec): min=548, max=42986, avg=37929.70, stdev=12494.21 00:10:41.154 lat (usec): min=590, max=43012, avg=37955.15, stdev=12493.27 00:10:41.154 clat percentiles (usec): 00:10:41.154 | 1.00th=[ 545], 5.00th=[ 1074], 10.00th=[ 1270], 20.00th=[41681], 00:10:41.154 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:41.154 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:10:41.154 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:41.154 | 99.99th=[42730] 00:10:41.154 bw ( KiB/s): min= 96, max= 128, per=1.10%, avg=105.60, stdev=13.15, samples=5 00:10:41.154 iops : min= 24, max= 32, avg=26.40, stdev= 3.29, samples=5 00:10:41.154 lat (usec) : 750=1.43%, 1000=1.43% 00:10:41.154 lat (msec) : 2=7.14%, 50=88.57% 00:10:41.154 cpu : usr=0.04%, sys=0.04%, ctx=70, majf=0, minf=1 00:10:41.154 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.154 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.154 issued rwts: 
total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.154 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.154 00:10:41.154 Run status group 0 (all jobs): 00:10:41.154 READ: bw=9532KiB/s (9761kB/s), 105KiB/s-4440KiB/s (107kB/s-4546kB/s), io=29.5MiB (30.9MB), run=2641-3169msec 00:10:41.154 00:10:41.154 Disk stats (read/write): 00:10:41.154 nvme0n1: ios=3186/0, merge=0/0, ticks=2632/0, in_queue=2632, util=93.29% 00:10:41.154 nvme0n2: ios=2398/0, merge=0/0, ticks=2924/0, in_queue=2924, util=95.11% 00:10:41.154 nvme0n3: ios=1540/0, merge=0/0, ticks=2485/0, in_queue=2485, util=95.99% 00:10:41.154 nvme0n4: ios=68/0, merge=0/0, ticks=2576/0, in_queue=2576, util=96.42% 00:10:41.154 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.154 18:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:41.413 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.413 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:41.673 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.673 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:41.933 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.933 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:41.933 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:41.933 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1996548 00:10:41.933 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:41.933 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:42.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 
0 ']' 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:42.193 nvmf hotplug test: fio failed as expected 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.193 18:22:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.193 rmmod nvme_tcp 00:10:42.451 rmmod nvme_fabrics 00:10:42.451 rmmod nvme_keyring 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1992940 ']' 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1992940 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1992940 ']' 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1992940 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1992940 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1992940' 00:10:42.451 killing process with pid 1992940 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1992940 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1992940 00:10:42.451 18:22:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.451 18:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.992 00:10:44.992 real 0m29.281s 00:10:44.992 user 2m36.664s 00:10:44.992 sys 0m9.460s 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.992 ************************************ 00:10:44.992 END TEST nvmf_fio_target 00:10:44.992 ************************************ 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.992 ************************************ 00:10:44.992 START TEST nvmf_bdevio 00:10:44.992 ************************************ 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:44.992 * Looking for test storage... 
00:10:44.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.992 --rc genhtml_branch_coverage=1 00:10:44.992 --rc genhtml_function_coverage=1 00:10:44.992 --rc genhtml_legend=1 00:10:44.992 --rc geninfo_all_blocks=1 00:10:44.992 --rc geninfo_unexecuted_blocks=1 00:10:44.992 00:10:44.992 ' 00:10:44.992 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.993 --rc genhtml_branch_coverage=1 00:10:44.993 --rc genhtml_function_coverage=1 00:10:44.993 --rc genhtml_legend=1 00:10:44.993 --rc geninfo_all_blocks=1 00:10:44.993 --rc geninfo_unexecuted_blocks=1 00:10:44.993 00:10:44.993 ' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.993 --rc genhtml_branch_coverage=1 00:10:44.993 --rc genhtml_function_coverage=1 00:10:44.993 --rc genhtml_legend=1 00:10:44.993 --rc geninfo_all_blocks=1 00:10:44.993 --rc geninfo_unexecuted_blocks=1 00:10:44.993 00:10:44.993 ' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.993 --rc genhtml_branch_coverage=1 00:10:44.993 --rc genhtml_function_coverage=1 00:10:44.993 --rc genhtml_legend=1 00:10:44.993 --rc geninfo_all_blocks=1 00:10:44.993 --rc geninfo_unexecuted_blocks=1 00:10:44.993 00:10:44.993 ' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.993 18:22:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:53.133 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:53.133 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.133 18:22:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:53.133 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:53.133 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.133 
18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.133 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.133 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:10:53.133 00:10:53.133 --- 10.0.0.2 ping statistics --- 00:10:53.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.133 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:10:53.133 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.133 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.133 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:10:53.133 00:10:53.133 --- 10.0.0.1 ping statistics --- 00:10:53.133 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.133 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.134 18:22:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2002001 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2002001 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2002001 ']' 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.134 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.134 [2024-12-06 18:22:47.114175] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
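The nvmf_tcp_init sequence traced above reduces to a small, reproducible topology: one port of the discovered E810 pair (cvl_0_0) is moved into a private network namespace as the target side, its sibling (cvl_0_1) stays in the root namespace as the initiator, port 4420 is opened with a tagged iptables rule, and both directions are verified with ping. A minimal consolidated sketch of those commands, assuming the cvl_0_0/cvl_0_1 names found during device discovery:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                           # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator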
00:10:53.134 [2024-12-06 18:22:47.114243] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.134 [2024-12-06 18:22:47.213607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.134 [2024-12-06 18:22:47.266485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.134 [2024-12-06 18:22:47.266539] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.134 [2024-12-06 18:22:47.266548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.134 [2024-12-06 18:22:47.266556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.134 [2024-12-06 18:22:47.266562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.134 [2024-12-06 18:22:47.269013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:53.134 [2024-12-06 18:22:47.269175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:53.134 [2024-12-06 18:22:47.269335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.134 [2024-12-06 18:22:47.269335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.395 18:22:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.395 [2024-12-06 18:22:47.994280] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.395 Malloc0 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.395 18:22:48 
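With the namespace up, nvmfappstart launches the target inside it and blocks until the RPC socket answers. Roughly, per the trace (the readiness loop below is a simplified stand-in for the waitforlisten helper, which polls /var/tmp/spdk.sock):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &      # -m 0x78: reactors on cores 3-6
  nvmfpid=$!
  # keep retrying a cheap RPC until the app listens on the socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done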
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.395 [2024-12-06 18:22:48.071205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:53.395 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:53.395 { 00:10:53.395 "params": { 00:10:53.395 "name": "Nvme$subsystem", 00:10:53.395 "trtype": "$TEST_TRANSPORT", 00:10:53.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:53.395 "adrfam": "ipv4", 00:10:53.395 "trsvcid": "$NVMF_PORT", 00:10:53.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:53.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:53.396 "hdgst": ${hdgst:-false}, 00:10:53.396 "ddgst": ${ddgst:-false} 00:10:53.396 }, 00:10:53.396 "method": "bdev_nvme_attach_controller" 00:10:53.396 } 00:10:53.396 EOF 00:10:53.396 )") 00:10:53.396 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:53.396 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:53.396 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:53.396 18:22:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:53.396 "params": { 00:10:53.396 "name": "Nvme1", 00:10:53.396 "trtype": "tcp", 00:10:53.396 "traddr": "10.0.0.2", 00:10:53.396 "adrfam": "ipv4", 00:10:53.396 "trsvcid": "4420", 00:10:53.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:53.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:53.396 "hdgst": false, 00:10:53.396 "ddgst": false 00:10:53.396 }, 00:10:53.396 "method": "bdev_nvme_attach_controller" 00:10:53.396 }' 00:10:53.396 [2024-12-06 18:22:48.129967] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
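The bdevio setup then provisions the target over RPC: a TCP transport, a 64 MiB malloc bdev, a subsystem carrying it as a namespace, and a listener on the namespaced IP, before the generated JSON above points bdevio's bdev_nvme_attach_controller at that listener. The rpc_cmd calls in the trace correspond to this rpc.py sequence (flags copied verbatim from the trace):

  rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420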
00:10:53.396 [2024-12-06 18:22:48.130031] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002134 ] 00:10:53.656 [2024-12-06 18:22:48.224397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.656 [2024-12-06 18:22:48.280758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.656 [2024-12-06 18:22:48.280928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.656 [2024-12-06 18:22:48.280928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.916 I/O targets: 00:10:53.916 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:53.916 00:10:53.916 00:10:53.916 CUnit - A unit testing framework for C - Version 2.1-3 00:10:53.916 http://cunit.sourceforge.net/ 00:10:53.916 00:10:53.916 00:10:53.916 Suite: bdevio tests on: Nvme1n1 00:10:53.916 Test: blockdev write read block ...passed 00:10:53.916 Test: blockdev write zeroes read block ...passed 00:10:53.916 Test: blockdev write zeroes read no split ...passed 00:10:53.916 Test: blockdev write zeroes read split ...passed 00:10:53.916 Test: blockdev write zeroes read split partial ...passed 00:10:53.916 Test: blockdev reset ...[2024-12-06 18:22:48.636600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:53.916 [2024-12-06 18:22:48.636715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1363580 (9): Bad file descriptor 00:10:53.916 [2024-12-06 18:22:48.691655] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:53.916 passed 00:10:53.916 Test: blockdev write read 8 blocks ...passed 00:10:53.916 Test: blockdev write read size > 128k ...passed 00:10:53.916 Test: blockdev write read invalid size ...passed 00:10:54.177 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:54.177 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:54.177 Test: blockdev write read max offset ...passed 00:10:54.177 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:54.177 Test: blockdev writev readv 8 blocks ...passed 00:10:54.177 Test: blockdev writev readv 30 x 1block ...passed 00:10:54.177 Test: blockdev writev readv block ...passed 00:10:54.177 Test: blockdev writev readv size > 128k ...passed 00:10:54.177 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:54.177 Test: blockdev comparev and writev ...[2024-12-06 18:22:48.958649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.177 [2024-12-06 18:22:48.958700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:54.177 [2024-12-06 18:22:48.958718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.177 [2024-12-06 18:22:48.958726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:54.177 [2024-12-06 18:22:48.959277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.177 [2024-12-06 18:22:48.959295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:54.177 [2024-12-06 18:22:48.959309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.177 [2024-12-06 18:22:48.959327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:54.177 [2024-12-06 18:22:48.959822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.177 [2024-12-06 18:22:48.959838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:54.177 [2024-12-06 18:22:48.959852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.177 [2024-12-06 18:22:48.959860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:54.177 [2024-12-06 18:22:48.960398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.177 [2024-12-06 18:22:48.960412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:54.177 [2024-12-06 18:22:48.960426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:54.178 [2024-12-06 18:22:48.960434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:54.438 passed 00:10:54.438 Test: blockdev nvme passthru rw ...passed 00:10:54.438 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:22:49.045254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.438 [2024-12-06 18:22:49.045272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:54.438 [2024-12-06 18:22:49.045629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.438 [2024-12-06 18:22:49.045648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:54.438 [2024-12-06 18:22:49.046004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.438 [2024-12-06 18:22:49.046016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:54.438 [2024-12-06 18:22:49.046379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:54.438 [2024-12-06 18:22:49.046393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:54.438 passed 00:10:54.438 Test: blockdev nvme admin passthru ...passed 00:10:54.438 Test: blockdev copy ...passed 00:10:54.438 00:10:54.438 Run Summary: Type Total Ran Passed Failed Inactive 00:10:54.438 suites 1 1 n/a 0 0 00:10:54.438 tests 23 23 23 0 0 00:10:54.438 asserts 152 152 152 0 n/a 00:10:54.438 00:10:54.438 Elapsed time = 1.298 seconds 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.699 rmmod nvme_tcp 00:10:54.699 rmmod nvme_fabrics 00:10:54.699 rmmod nvme_keyring 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:54.699 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
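nvmfcleanup unloads the kernel initiator modules tolerantly: the set +e / for i in {1..20} pattern traced above retries modprobe -r because nvme-tcp can stay busy for a moment while the controller finishes tearing down (the successful pass logs rmmod nvme_tcp, nvme_fabrics and nvme_keyring). A condensed sketch of that retry loop (the one-second pause is illustrative):

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1                       # module still in use; try again
  done
  set -e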
00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2002001 ']' 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2002001 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2002001 ']' 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2002001 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2002001 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2002001' 00:10:54.700 killing process with pid 2002001 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2002001 00:10:54.700 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2002001 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.960 18:22:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.875 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.875 00:10:56.875 real 0m12.191s 00:10:56.875 user 0m13.370s 00:10:56.875 sys 0m6.172s 00:10:56.875 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.875 18:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.875 ************************************ 00:10:56.875 END TEST nvmf_bdevio 00:10:56.875 ************************************ 00:10:56.875 18:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:56.875 00:10:56.875 real 5m4.761s 00:10:56.875 user 11m52.757s 00:10:56.875 sys 1m52.400s 
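Teardown avoids firewall bookkeeping entirely: every rule was inserted with an 'SPDK_NVMF:' comment, so the iptr helper can sweep them with a save/filter/restore pass, after which the namespace and the test address go away. A sketch, assuming _remove_spdk_ns (whose trace is suppressed above) amounts to deleting the namespace:

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator address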
00:10:56.875 18:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.875 18:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:56.875 ************************************ 00:10:56.875 END TEST nvmf_target_core 00:10:56.875 ************************************ 00:10:57.137 18:22:51 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:57.137 18:22:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.138 18:22:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.138 18:22:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.138 ************************************ 00:10:57.138 START TEST nvmf_target_extra 00:10:57.138 ************************************ 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:57.138 * Looking for test storage... 00:10:57.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:57.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.138 --rc genhtml_branch_coverage=1 00:10:57.138 --rc genhtml_function_coverage=1 00:10:57.138 --rc genhtml_legend=1 00:10:57.138 --rc geninfo_all_blocks=1 00:10:57.138 --rc geninfo_unexecuted_blocks=1 00:10:57.138 00:10:57.138 ' 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:57.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.138 --rc genhtml_branch_coverage=1 00:10:57.138 --rc genhtml_function_coverage=1 00:10:57.138 --rc genhtml_legend=1 00:10:57.138 --rc geninfo_all_blocks=1 00:10:57.138 --rc geninfo_unexecuted_blocks=1 00:10:57.138 00:10:57.138 ' 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:57.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.138 --rc genhtml_branch_coverage=1 00:10:57.138 --rc genhtml_function_coverage=1 00:10:57.138 --rc genhtml_legend=1 00:10:57.138 --rc geninfo_all_blocks=1 00:10:57.138 --rc geninfo_unexecuted_blocks=1 00:10:57.138 00:10:57.138 ' 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:57.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.138 --rc genhtml_branch_coverage=1 00:10:57.138 --rc genhtml_function_coverage=1 00:10:57.138 --rc genhtml_legend=1 00:10:57.138 --rc geninfo_all_blocks=1 00:10:57.138 --rc geninfo_unexecuted_blocks=1 00:10:57.138 00:10:57.138 ' 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
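Before choosing lcov flags, the lt helper from scripts/common.sh (traced above, and run again for the nvmf_example suite below) splits both version strings on '.', '-' and ':' and compares the fields numerically from left to right, so 'lt 1.15 2' decides whether the 1.x --rc options are needed. A stripped-down sketch assuming purely numeric fields (the real helper also validates each field with the decimal function seen in the trace):

  cmp_lt() {   # return 0 if $1 sorts strictly before $2
      local -a v1 v2
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal
  }
  cmp_lt 1.15 2 && echo 'lcov < 2: use the --rc lcov_*_coverage=1 spellings'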
00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.138 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.450 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.451 ************************************ 00:10:57.451 START TEST nvmf_example 00:10:57.451 ************************************ 00:10:57.451 18:22:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:57.451 * Looking for test storage... 
00:10:57.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:57.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.451 --rc genhtml_branch_coverage=1 00:10:57.451 --rc genhtml_function_coverage=1 00:10:57.451 --rc genhtml_legend=1 00:10:57.451 --rc geninfo_all_blocks=1 00:10:57.451 --rc geninfo_unexecuted_blocks=1 00:10:57.451 00:10:57.451 ' 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:57.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.451 --rc genhtml_branch_coverage=1 00:10:57.451 --rc genhtml_function_coverage=1 00:10:57.451 --rc genhtml_legend=1 00:10:57.451 --rc geninfo_all_blocks=1 00:10:57.451 --rc geninfo_unexecuted_blocks=1 00:10:57.451 00:10:57.451 ' 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:57.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.451 --rc genhtml_branch_coverage=1 00:10:57.451 --rc genhtml_function_coverage=1 00:10:57.451 --rc genhtml_legend=1 00:10:57.451 --rc geninfo_all_blocks=1 00:10:57.451 --rc geninfo_unexecuted_blocks=1 00:10:57.451 00:10:57.451 ' 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:57.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.451 --rc genhtml_branch_coverage=1 00:10:57.451 --rc genhtml_function_coverage=1 00:10:57.451 --rc genhtml_legend=1 00:10:57.451 --rc geninfo_all_blocks=1 00:10:57.451 --rc geninfo_unexecuted_blocks=1 00:10:57.451 00:10:57.451 ' 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.451 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:57.775 18:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.775 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:57.776 18:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.776 18:22:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:05.994 18:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:05.994 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:05.994 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:05.994 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:05.994 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.994 18:22:59 
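
The loop above resolves each matched PCI function to its kernel net device through sysfs. A minimal standalone sketch of the same lookup, using the 0000:4b:00.0 address from this run (the sysfs layout is standard Linux, nothing SPDK-specific):

    # List the netdev(s) registered for a PCI function, as the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above does.
    pci=0000:4b:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done
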
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:05.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:05.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms
00:11:05.994
00:11:05.994 --- 10.0.0.2 ping statistics ---
00:11:05.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:05.994 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms
00:11:05.994 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:05.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:05.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms
00:11:05.994
00:11:05.994 --- 10.0.0.1 ping statistics ---
00:11:05.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:05.995 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2006865
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2006865
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2006865 ']'
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
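
Condensed from the nvmf_tcp_init trace above, the namespace plumbing amounts to the following sketch; interface names and addresses are the ones from this run, and the harness's ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment so teardown can find it:

    # Target side lives in its own netns; initiator side stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> initiator

Both pings answering, as they do above, is the gate for continuing with the target application.
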
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:05.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:05.995 18:22:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
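
The rpc_cmd calls above go through SPDK's scripts/rpc.py against the app's UNIX-domain socket, so the subsystem setup is equivalent to roughly this standalone sequence (socket path and tree layout as in this run; Malloc0 is the name bdev_malloc_create returned above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192   # TCP transport; -u 8192 sets in-capsule data size
    $rpc -s /var/tmp/spdk.sock bdev_malloc_create 64 512                 # 64 MiB RAM disk, 512 B blocks -> "Malloc0"
    $rpc -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
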
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:05.995 18:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:18.221 Initializing NVMe Controllers
00:11:18.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:18.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:18.221 Initialization complete. Launching workers.
00:11:18.221 ========================================================
00:11:18.221 Latency(us)
00:11:18.221 Device Information : IOPS MiB/s Average min max
00:11:18.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18421.06 71.96 3473.84 608.37 15554.84
00:11:18.221 ========================================================
00:11:18.221 Total : 18421.06 71.96 3473.84 608.37 15554.84
00:11:18.221
00:11:18.222 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:18.222 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:18.222 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:18.222 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:11:18.222 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:18.222 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:11:18.222 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:18.222 18:23:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:18.222 rmmod nvme_tcp
00:11:18.222 rmmod nvme_fabrics
00:11:18.222 rmmod nvme_keyring
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2006865 ']'
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2006865
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2006865 ']'
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2006865
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2006865
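
Spelled out, the perf invocation above drives the listener just created for ten seconds at queue depth 64 with 4 KiB random I/O at a 30% read mix; the same command re-stated with its knobs annotated:

    # -q 64: queue depth; -o 4096: I/O size in bytes; -w randrw: random mixed R/W;
    # -M 30: percentage of reads in the mix; -t 10: run time in seconds;
    # -r: transport ID of the listener configured above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The two result columns are consistent with each other: 18421.06 IOPS at 4096 bytes per I/O is about 71.96 MiB/s, matching the table.
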
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2006865'
00:11:18.222 killing process with pid 2006865
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2006865
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2006865
00:11:18.222 nvmf threads initialize successfully
00:11:18.222 bdev subsystem init successfully
00:11:18.222 created a nvmf target service
00:11:18.222 create targets's poll groups done
00:11:18.222 all subsystems of target started
00:11:18.222 nvmf target is running
00:11:18.222 all subsystems of target stopped
00:11:18.222 destroy targets's poll groups done
00:11:18.222 destroyed the nvmf target service
00:11:18.222 bdev subsystem finish successfully
00:11:18.222 nvmf threads destroy successfully
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:18.222 18:23:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:18.791 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:18.791 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:18.791 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:18.791 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.791
00:11:18.791 real 0m21.383s
00:11:18.791 user 0m46.645s
00:11:18.791 sys 0m6.914s
00:11:18.791 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:18.791 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:18.791 ************************************
00:11:18.791 END TEST nvmf_example
00:11:18.791 ************************************
00:11:18.792 18:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:18.792 18:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.792 18:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.792 18:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.792 ************************************ 00:11:18.792 START TEST nvmf_filesystem 00:11:18.792 ************************************ 00:11:18.792 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:18.792 * Looking for test storage... 00:11:18.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.792 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.792 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.792 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.055 --rc genhtml_branch_coverage=1 00:11:19.055 --rc genhtml_function_coverage=1 00:11:19.055 --rc genhtml_legend=1 00:11:19.055 --rc geninfo_all_blocks=1 00:11:19.055 --rc geninfo_unexecuted_blocks=1 00:11:19.055 00:11:19.055 ' 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.055 --rc genhtml_branch_coverage=1 00:11:19.055 --rc genhtml_function_coverage=1 00:11:19.055 --rc genhtml_legend=1 00:11:19.055 --rc geninfo_all_blocks=1 00:11:19.055 --rc geninfo_unexecuted_blocks=1 00:11:19.055 00:11:19.055 ' 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.055 --rc genhtml_branch_coverage=1 00:11:19.055 --rc genhtml_function_coverage=1 00:11:19.055 --rc genhtml_legend=1 00:11:19.055 --rc geninfo_all_blocks=1 00:11:19.055 --rc geninfo_unexecuted_blocks=1 00:11:19.055 00:11:19.055 ' 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.055 --rc genhtml_branch_coverage=1 00:11:19.055 --rc genhtml_function_coverage=1 00:11:19.055 --rc genhtml_legend=1 00:11:19.055 --rc geninfo_all_blocks=1 00:11:19.055 --rc geninfo_unexecuted_blocks=1 00:11:19.055 00:11:19.055 ' 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:19.055 18:23:13 
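
The cmp_versions walk above is scripts/common.sh splitting both version strings on dots and comparing them field by field to decide that lcov 1.15 predates 2. Where GNU coreutils is available, a one-line approximation of the same check (a sketch, not the script's own code):

    # sort -V orders version strings naturally; if 1.15 sorts first, it is < 2.
    [ "$(printf '%s\n' 1.15 2 | sort -V | head -n1)" = "1.15" ] && echo "lcov < 2"
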
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:19.055 
18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:19.055 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:19.056 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:19.056 #define SPDK_CONFIG_H 00:11:19.056 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:19.056 #define SPDK_CONFIG_APPS 1 00:11:19.056 #define SPDK_CONFIG_ARCH native 00:11:19.056 #undef SPDK_CONFIG_ASAN 00:11:19.056 #undef SPDK_CONFIG_AVAHI 00:11:19.056 #undef SPDK_CONFIG_CET 00:11:19.056 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:19.056 #define SPDK_CONFIG_COVERAGE 1 00:11:19.056 #define SPDK_CONFIG_CROSS_PREFIX 00:11:19.056 #undef SPDK_CONFIG_CRYPTO 00:11:19.056 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:19.056 #undef SPDK_CONFIG_CUSTOMOCF 00:11:19.056 #undef SPDK_CONFIG_DAOS 00:11:19.056 #define SPDK_CONFIG_DAOS_DIR 00:11:19.056 #define SPDK_CONFIG_DEBUG 1 00:11:19.056 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:19.056 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:19.056 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:19.056 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:19.056 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:19.056 #undef SPDK_CONFIG_DPDK_UADK 00:11:19.056 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:19.056 #define SPDK_CONFIG_EXAMPLES 1 00:11:19.056 #undef SPDK_CONFIG_FC 00:11:19.056 #define SPDK_CONFIG_FC_PATH 00:11:19.056 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:19.056 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:19.056 #define SPDK_CONFIG_FSDEV 1 00:11:19.056 #undef SPDK_CONFIG_FUSE 00:11:19.056 #undef SPDK_CONFIG_FUZZER 00:11:19.056 #define SPDK_CONFIG_FUZZER_LIB 00:11:19.056 #undef SPDK_CONFIG_GOLANG 00:11:19.056 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:19.056 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:19.056 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:19.056 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:19.056 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:19.056 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:19.056 #undef SPDK_CONFIG_HAVE_LZ4 00:11:19.056 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:19.056 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:19.056 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:19.056 #define SPDK_CONFIG_IDXD 1 00:11:19.056 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:19.056 #undef SPDK_CONFIG_IPSEC_MB 00:11:19.056 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:19.056 #define SPDK_CONFIG_ISAL 1 00:11:19.056 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:19.056 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:19.056 #define SPDK_CONFIG_LIBDIR 00:11:19.056 #undef SPDK_CONFIG_LTO 00:11:19.056 #define SPDK_CONFIG_MAX_LCORES 128 00:11:19.056 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:19.056 #define SPDK_CONFIG_NVME_CUSE 1 00:11:19.056 #undef SPDK_CONFIG_OCF 00:11:19.056 #define SPDK_CONFIG_OCF_PATH 00:11:19.056 #define SPDK_CONFIG_OPENSSL_PATH 00:11:19.056 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:19.056 #define SPDK_CONFIG_PGO_DIR 00:11:19.056 #undef SPDK_CONFIG_PGO_USE 00:11:19.056 #define SPDK_CONFIG_PREFIX /usr/local 00:11:19.056 #undef SPDK_CONFIG_RAID5F 00:11:19.056 #undef SPDK_CONFIG_RBD 00:11:19.056 #define SPDK_CONFIG_RDMA 1 00:11:19.056 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:19.056 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:19.056 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:19.056 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:19.056 #define SPDK_CONFIG_SHARED 1 00:11:19.056 #undef SPDK_CONFIG_SMA 00:11:19.057 #define SPDK_CONFIG_TESTS 1 00:11:19.057 #undef SPDK_CONFIG_TSAN 
00:11:19.057 #define SPDK_CONFIG_UBLK 1 00:11:19.057 #define SPDK_CONFIG_UBSAN 1 00:11:19.057 #undef SPDK_CONFIG_UNIT_TESTS 00:11:19.057 #undef SPDK_CONFIG_URING 00:11:19.057 #define SPDK_CONFIG_URING_PATH 00:11:19.057 #undef SPDK_CONFIG_URING_ZNS 00:11:19.057 #undef SPDK_CONFIG_USDT 00:11:19.057 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:19.057 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:19.057 #define SPDK_CONFIG_VFIO_USER 1 00:11:19.057 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:19.057 #define SPDK_CONFIG_VHOST 1 00:11:19.057 #define SPDK_CONFIG_VIRTIO 1 00:11:19.057 #undef SPDK_CONFIG_VTUNE 00:11:19.057 #define SPDK_CONFIG_VTUNE_DIR 00:11:19.057 #define SPDK_CONFIG_WERROR 1 00:11:19.057 #define SPDK_CONFIG_WPDK_DIR 00:11:19.057 #undef SPDK_CONFIG_XNVME 00:11:19.057 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:19.057 18:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:19.057 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:19.058 18:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:19.058 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
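The LD_LIBRARY_PATH and PYTHONPATH values traced above repeat the same spdk/build/lib, dpdk/build/lib, libvfio-user, python, and test/rpc_plugins directories several times over. That is a side effect of re-sourcing the common setup script: each pass appends the same directories to whatever is already in the variable, and the leading `:` shows the variable started out empty on the first pass. A hedged reconstruction of the exports that produce this pattern (the exact form in autotest_common.sh, and the `$rootdir` name, are assumptions):

    # Each re-source appends the same directories, so after N passes the
    # value holds N copies; the leading ':' is the originally empty value.
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR
    export PYTHONPATH=$PYTHONPATH:$rootdir/python:$rootdir/test/rpc_plugins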
00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
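The entries just above wire up the sanitizer runtime for the test run: the LeakSanitizer suppression file is removed and rebuilt with a single `leak:libfuse3.so` rule, and ASAN/UBSAN are configured to abort loudly on the first error. Collected into plain shell, with the option strings copied verbatim from the trace (the surrounding script structure is condensed):

    # Rebuild the LSAN suppression file and point the sanitizers at it.
    rm -rf /var/tmp/asan_suppression_file
    echo "leak:libfuse3.so" > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
    export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
    # exitcode=134 matches a SIGABRT death (128+6), so CI treats UBSAN
    # findings the same way it treats crashes.
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'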
00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2009663 ]] 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2009663 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 
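`set_test_storage 2147483648` asks for 2 GiB of scratch space, and the trace that follows shows how the function decides where to put it: a fallback directory name is generated with `mktemp -udt spdk.XXXXXX` (here /tmp/spdk.xoIA2K), `df -T` output is parsed into associative arrays keyed by mount point, and the first candidate directory whose filesystem offers at least the requested space wins. A condensed, hedged sketch of that selection logic (not the verbatim source; `$testdir` is supplied by the calling test script):

    #!/usr/bin/env bash
    # Condensed sketch of the set_test_storage logic visible in the trace.
    requested_size=2147483648                      # 2 GiB, as passed above
    storage_fallback=$(mktemp -udt spdk.XXXXXX)    # e.g. /tmp/spdk.xoIA2K here
    declare -A avails
    while read -r source fs size use avail _ mount; do
        avails["$mount"]=$avail                    # free space per mount point
    done < <(df -T | grep -v Filesystem)
    echo '* Looking for test storage...'
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( ${avails[$mount]:-0} >= requested_size )); then
            echo "* Found test storage at $target_dir"
            break
        fi
    done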
00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.xoIA2K 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xoIA2K/tests/target /tmp/spdk.xoIA2K 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:19.059 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:19.060 18:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123278557184 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356521472 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6077964288 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668229632 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847951360 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23355392 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.060 18:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64678014976 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=245760 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:19.060 * Looking for test storage... 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123278557184 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8292556800 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:19.060 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:19.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.321 --rc genhtml_branch_coverage=1 00:11:19.321 --rc genhtml_function_coverage=1 00:11:19.321 --rc genhtml_legend=1 00:11:19.321 --rc geninfo_all_blocks=1 00:11:19.321 --rc geninfo_unexecuted_blocks=1 00:11:19.321 00:11:19.321 ' 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:19.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.321 --rc genhtml_branch_coverage=1 00:11:19.321 --rc genhtml_function_coverage=1 00:11:19.321 --rc genhtml_legend=1 00:11:19.321 --rc geninfo_all_blocks=1 00:11:19.321 --rc geninfo_unexecuted_blocks=1 00:11:19.321 00:11:19.321 ' 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:19.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.321 --rc genhtml_branch_coverage=1 00:11:19.321 --rc genhtml_function_coverage=1 00:11:19.321 --rc genhtml_legend=1 00:11:19.321 --rc geninfo_all_blocks=1 00:11:19.321 --rc geninfo_unexecuted_blocks=1 00:11:19.321 00:11:19.321 ' 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:19.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:19.321 --rc genhtml_branch_coverage=1 00:11:19.321 --rc genhtml_function_coverage=1 00:11:19.321 --rc genhtml_legend=1 00:11:19.321 --rc geninfo_all_blocks=1 00:11:19.321 --rc geninfo_unexecuted_blocks=1 00:11:19.321 00:11:19.321 ' 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:19.321 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:19.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:19.322 18:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:19.322 18:23:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:27.462 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:27.462 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:27.462 18:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.462 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:27.463 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:27.463 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:27.463 18:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:27.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:11:27.463 00:11:27.463 --- 10.0.0.2 ping statistics --- 00:11:27.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.463 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:11:27.463 00:11:27.463 --- 10.0.0.1 ping statistics --- 00:11:27.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.463 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.463 ************************************ 00:11:27.463 START TEST nvmf_filesystem_no_in_capsule 00:11:27.463 ************************************ 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2013309 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2013309 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2013309 ']' 00:11:27.463 
18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.463 18:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.463 [2024-12-06 18:23:21.600320] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:11:27.463 [2024-12-06 18:23:21.600386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.463 [2024-12-06 18:23:21.699305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.463 [2024-12-06 18:23:21.752313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.463 [2024-12-06 18:23:21.752370] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.463 [2024-12-06 18:23:21.752381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.463 [2024-12-06 18:23:21.752388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.463 [2024-12-06 18:23:21.752395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
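The nvmf_tcp_init trace above is the harness isolating the two e810 ports so one host can play both roles: the target port (cvl_0_0, 10.0.0.2) is moved into its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, and the firewall is opened for NVMe/TCP's default port 4420. Condensed from the commands in the trace (a sketch; the script also flushes stale addresses first and adds an iptables comment tag):

  # one host, two roles: the target NIC goes into a private netns, the initiator NIC stays put
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every nvmf_tgt invocation that follows is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the listener at 10.0.0.2:4420 is reached from the initiator side over real hardware rather than loopback.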
00:11:27.463 [2024-12-06 18:23:21.754449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.463 [2024-12-06 18:23:21.754609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.463 [2024-12-06 18:23:21.754770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.463 [2024-12-06 18:23:21.754907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.724 [2024-12-06 18:23:22.484389] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:27.724 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.725 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.985 Malloc1 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.985 18:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.985 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.986 [2024-12-06 18:23:22.647827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:27.986 { 00:11:27.986 "name": "Malloc1", 00:11:27.986 "aliases": [ 00:11:27.986 "01c7f4a5-90ea-4700-854e-1ea5f26aa3e4" 00:11:27.986 ], 00:11:27.986 "product_name": "Malloc disk", 00:11:27.986 "block_size": 512, 00:11:27.986 "num_blocks": 1048576, 00:11:27.986 "uuid": "01c7f4a5-90ea-4700-854e-1ea5f26aa3e4", 00:11:27.986 "assigned_rate_limits": { 00:11:27.986 "rw_ios_per_sec": 0, 00:11:27.986 "rw_mbytes_per_sec": 0, 00:11:27.986 "r_mbytes_per_sec": 0, 00:11:27.986 "w_mbytes_per_sec": 0 00:11:27.986 }, 00:11:27.986 "claimed": true, 00:11:27.986 "claim_type": "exclusive_write", 00:11:27.986 "zoned": false, 00:11:27.986 "supported_io_types": { 00:11:27.986 "read": 
true, 00:11:27.986 "write": true, 00:11:27.986 "unmap": true, 00:11:27.986 "flush": true, 00:11:27.986 "reset": true, 00:11:27.986 "nvme_admin": false, 00:11:27.986 "nvme_io": false, 00:11:27.986 "nvme_io_md": false, 00:11:27.986 "write_zeroes": true, 00:11:27.986 "zcopy": true, 00:11:27.986 "get_zone_info": false, 00:11:27.986 "zone_management": false, 00:11:27.986 "zone_append": false, 00:11:27.986 "compare": false, 00:11:27.986 "compare_and_write": false, 00:11:27.986 "abort": true, 00:11:27.986 "seek_hole": false, 00:11:27.986 "seek_data": false, 00:11:27.986 "copy": true, 00:11:27.986 "nvme_iov_md": false 00:11:27.986 }, 00:11:27.986 "memory_domains": [ 00:11:27.986 { 00:11:27.986 "dma_device_id": "system", 00:11:27.986 "dma_device_type": 1 00:11:27.986 }, 00:11:27.986 { 00:11:27.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.986 "dma_device_type": 2 00:11:27.986 } 00:11:27.986 ], 00:11:27.986 "driver_specific": {} 00:11:27.986 } 00:11:27.986 ]' 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:27.986 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:28.247 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:28.247 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:28.247 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:28.248 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:28.248 18:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.633 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.633 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:29.633 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.633 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:29.633 18:23:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.175 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.175 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.175 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:32.175 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:32.176 18:23:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:32.436 18:23:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:33.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:33.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:33.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:33.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.375 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.635 ************************************ 00:11:33.635 START TEST filesystem_ext4 00:11:33.635 ************************************ 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
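The filesystem_ext4/btrfs/xfs subtests that follow all run nvmf_filesystem_create from target/filesystem.sh: put a filesystem on the NVMe-oF-backed partition, do a small write/delete cycle, and verify the target survived the I/O. A condensed sketch of one pass, assuming the connected namespace appears as /dev/nvme0n1p1 and $nvmfpid holds the target's pid as in this run:

  mkfs.ext4 -F /dev/nvme0n1p1              # btrfs/xfs passes use mkfs.btrfs -f / mkfs.xfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                    # one data + metadata write
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible after the cycle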
00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:33.635 18:23:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:33.635 mke2fs 1.47.0 (5-Feb-2023) 00:11:33.635 Discarding device blocks: 0/522240 done 00:11:33.635 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:33.635 Filesystem UUID: e5d5c01b-2402-4db2-96bc-08cb18638ab3 00:11:33.635 Superblock backups stored on blocks: 00:11:33.635 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:33.635 00:11:33.635 Allocating group tables: 0/64 done 00:11:33.635 Writing inode tables: 0/64 done 00:11:35.018 Creating journal (8192 blocks): done 00:11:36.967 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:36.967 00:11:36.967 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:36.967 18:23:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.547 
18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2013309 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.547 00:11:43.547 real 0m9.219s 00:11:43.547 user 0m0.033s 00:11:43.547 sys 0m0.075s 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.547 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:43.548 ************************************ 00:11:43.548 END TEST filesystem_ext4 00:11:43.548 ************************************ 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.548 ************************************ 00:11:43.548 START TEST filesystem_btrfs 00:11:43.548 ************************************ 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:43.548 18:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:43.548 btrfs-progs v6.8.1 00:11:43.548 See https://btrfs.readthedocs.io for more information. 00:11:43.548 00:11:43.548 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:43.548 NOTE: several default settings have changed in version 5.15, please make sure 00:11:43.548 this does not affect your deployments: 00:11:43.548 - DUP for metadata (-m dup) 00:11:43.548 - enabled no-holes (-O no-holes) 00:11:43.548 - enabled free-space-tree (-R free-space-tree) 00:11:43.548 00:11:43.548 Label: (null) 00:11:43.548 UUID: 82224a47-fa24-4df8-b884-d01967ef3d1e 00:11:43.548 Node size: 16384 00:11:43.548 Sector size: 4096 (CPU page size: 4096) 00:11:43.548 Filesystem size: 510.00MiB 00:11:43.548 Block group profiles: 00:11:43.548 Data: single 8.00MiB 00:11:43.548 Metadata: DUP 32.00MiB 00:11:43.548 System: DUP 8.00MiB 00:11:43.548 SSD detected: yes 00:11:43.548 Zoned device: no 00:11:43.548 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:43.548 Checksum: crc32c 00:11:43.548 Number of devices: 1 00:11:43.548 Devices: 00:11:43.548 ID SIZE PATH 00:11:43.548 1 510.00MiB /dev/nvme0n1p1 00:11:43.548 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:43.548 18:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2013309 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.119 
18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.119 00:11:44.119 real 0m1.358s 00:11:44.119 user 0m0.034s 00:11:44.119 sys 0m0.114s 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:44.119 ************************************ 00:11:44.119 END TEST filesystem_btrfs 00:11:44.119 ************************************ 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.119 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.380 ************************************ 00:11:44.380 START TEST filesystem_xfs 00:11:44.380 ************************************ 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:44.380 18:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:44.380 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:44.380 = sectsz=512 attr=2, projid32bit=1 00:11:44.380 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:44.380 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:44.380 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:44.380 = sunit=0 swidth=0 blks 00:11:44.380 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:44.380 log =internal log bsize=4096 blocks=16384, version=2 00:11:44.380 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:44.380 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:45.321 Discarding blocks...Done. 00:11:45.321 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:45.321 18:23:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2013309 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.866 00:11:47.866 real 0m3.663s 00:11:47.866 user 0m0.033s 00:11:47.866 sys 0m0.073s 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:47.866 ************************************ 00:11:47.866 END TEST filesystem_xfs 00:11:47.866 ************************************ 00:11:47.866 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.126 18:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2013309 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2013309 ']' 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2013309 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2013309 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.126 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2013309' 00:11:48.126 killing process with pid 2013309 00:11:48.127 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2013309 00:11:48.127 18:23:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2013309 00:11:48.387 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:48.387 00:11:48.387 real 0m21.562s 00:11:48.387 user 1m25.216s 00:11:48.387 sys 0m1.519s 00:11:48.387 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:48.387 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.387 ************************************ 00:11:48.387 END TEST nvmf_filesystem_no_in_capsule 00:11:48.387 ************************************ 00:11:48.387 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:48.387 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.387 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.387 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.649 ************************************ 00:11:48.649 START TEST nvmf_filesystem_in_capsule 00:11:48.649 ************************************ 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2017891 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2017891 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2017891 ']' 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
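From here the whole suite repeats as nvmf_filesystem_in_capsule with in_capsule=4096. The only functional change is the transport setup: the -c argument to nvmf_create_transport sets the in-capsule data size, so writes of up to 4 KiB ride inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. Roughly, via SPDK's rpc.py (the test issues the same RPC through its rpc_cmd wrapper):

  # first suite: no in-capsule data
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  # this suite: 4 KiB of in-capsule data
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096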
00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.649 18:23:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.649 [2024-12-06 18:23:43.251894] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:11:48.649 [2024-12-06 18:23:43.251952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.649 [2024-12-06 18:23:43.344926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.649 [2024-12-06 18:23:43.379393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.649 [2024-12-06 18:23:43.379424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.649 [2024-12-06 18:23:43.379430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.649 [2024-12-06 18:23:43.379435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.649 [2024-12-06 18:23:43.379439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.649 [2024-12-06 18:23:43.380761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.649 [2024-12-06 18:23:43.381013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.649 [2024-12-06 18:23:43.381171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.649 [2024-12-06 18:23:43.381171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.590 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:49.590 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:49.590 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:49.590 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:49.590 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.590 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.591 [2024-12-06 18:23:44.100831] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.591 18:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.591 Malloc1 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.591 [2024-12-06 18:23:44.238045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:49.591 18:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:49.591 { 00:11:49.591 "name": "Malloc1", 00:11:49.591 "aliases": [ 00:11:49.591 "35f73805-e866-4c6b-a565-94ff0023f2f5" 00:11:49.591 ], 00:11:49.591 "product_name": "Malloc disk", 00:11:49.591 "block_size": 512, 00:11:49.591 "num_blocks": 1048576, 00:11:49.591 "uuid": "35f73805-e866-4c6b-a565-94ff0023f2f5", 00:11:49.591 "assigned_rate_limits": { 00:11:49.591 "rw_ios_per_sec": 0, 00:11:49.591 "rw_mbytes_per_sec": 0, 00:11:49.591 "r_mbytes_per_sec": 0, 00:11:49.591 "w_mbytes_per_sec": 0 00:11:49.591 }, 00:11:49.591 "claimed": true, 00:11:49.591 "claim_type": "exclusive_write", 00:11:49.591 "zoned": false, 00:11:49.591 "supported_io_types": { 00:11:49.591 "read": true, 00:11:49.591 "write": true, 00:11:49.591 "unmap": true, 00:11:49.591 "flush": true, 00:11:49.591 "reset": true, 00:11:49.591 "nvme_admin": false, 00:11:49.591 "nvme_io": false, 00:11:49.591 "nvme_io_md": false, 00:11:49.591 "write_zeroes": true, 00:11:49.591 "zcopy": true, 00:11:49.591 "get_zone_info": false, 00:11:49.591 "zone_management": false, 00:11:49.591 "zone_append": false, 00:11:49.591 "compare": false, 00:11:49.591 "compare_and_write": false, 00:11:49.591 "abort": true, 00:11:49.591 "seek_hole": false, 00:11:49.591 "seek_data": false, 00:11:49.591 "copy": true, 00:11:49.591 "nvme_iov_md": false 00:11:49.591 }, 00:11:49.591 "memory_domains": [ 00:11:49.591 { 00:11:49.591 "dma_device_id": "system", 00:11:49.591 "dma_device_type": 1 00:11:49.591 }, 00:11:49.591 { 00:11:49.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.591 "dma_device_type": 2 00:11:49.591 } 00:11:49.591 ], 00:11:49.591 "driver_specific": {} 00:11:49.591 } 00:11:49.591 ]' 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.591 18:23:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.507 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.507 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:51.507 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.507 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:51.507 18:23:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:53.422 18:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:53.422 18:23:48 
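Up to this point the trace captures the standard SPDK target bring-up followed by the kernel-initiator connect. A minimal shell sketch of the same sequence, reconstructed from the xtrace lines above (rpc_cmd is SPDK's RPC wrapper, so invoking scripts/rpc.py directly is an assumption; the 15-retry, 2-second poll mirrors the waitforserial trace):

    # Target side: 512 MiB ram-backed bdev, exported via NVMe/TCP on 10.0.0.2:4420
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: connect, then poll lsblk until the namespace shows up by serial
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
                 --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    done

The size check that follows in the trace compares the bdev size reported by bdev_get_bdevs (block_size x num_blocks = 512 x 1048576 = 536870912 bytes, i.e. 512 MiB) against /sys/block/nvme0n1 before the device is partitioned with parted.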
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:53.993 18:23:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.936 ************************************ 00:11:54.936 START TEST filesystem_in_capsule_ext4 00:11:54.936 ************************************ 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:54.936 18:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:54.936 mke2fs 1.47.0 (5-Feb-2023) 00:11:54.936 Discarding device blocks: 0/522240 done 00:11:54.936 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:54.936 Filesystem UUID: 1f34dd62-1fb0-45af-b02e-3a5344332e49 00:11:54.936 Superblock backups stored on blocks: 00:11:54.936 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:54.936 00:11:54.936 Allocating group tables: 0/64 done 00:11:54.936 Writing inode tables: 
0/64 done 00:11:55.197 Creating journal (8192 blocks): done 00:11:56.580 Writing superblocks and filesystem accounting information: 0/64 done 00:11:56.580 00:11:56.580 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:56.580 18:23:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2017891 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.157 00:12:03.157 real 0m7.243s 00:12:03.157 user 0m0.028s 00:12:03.157 sys 0m0.078s 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 ************************************ 00:12:03.157 END TEST filesystem_in_capsule_ext4 00:12:03.157 ************************************ 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 
************************************ 00:12:03.157 START TEST filesystem_in_capsule_btrfs 00:12:03.157 ************************************ 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.157 18:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:03.157 btrfs-progs v6.8.1 00:12:03.157 See https://btrfs.readthedocs.io for more information. 00:12:03.157 00:12:03.157 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:03.157 NOTE: several default settings have changed in version 5.15, please make sure 00:12:03.157 this does not affect your deployments: 00:12:03.157 - DUP for metadata (-m dup) 00:12:03.157 - enabled no-holes (-O no-holes) 00:12:03.157 - enabled free-space-tree (-R free-space-tree) 00:12:03.157 00:12:03.157 Label: (null) 00:12:03.157 UUID: 363faa2d-9223-4311-8d65-8091f3f977d3 00:12:03.157 Node size: 16384 00:12:03.157 Sector size: 4096 (CPU page size: 4096) 00:12:03.157 Filesystem size: 510.00MiB 00:12:03.157 Block group profiles: 00:12:03.157 Data: single 8.00MiB 00:12:03.157 Metadata: DUP 32.00MiB 00:12:03.157 System: DUP 8.00MiB 00:12:03.157 SSD detected: yes 00:12:03.157 Zoned device: no 00:12:03.157 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:03.157 Checksum: crc32c 00:12:03.157 Number of devices: 1 00:12:03.157 Devices: 00:12:03.157 ID SIZE PATH 00:12:03.157 1 510.00MiB /dev/nvme0n1p1 00:12:03.157 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2017891 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:03.157 00:12:03.157 real 0m0.637s 00:12:03.157 user 0m0.029s 00:12:03.157 sys 0m0.118s 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:03.157 ************************************ 00:12:03.157 END TEST filesystem_in_capsule_btrfs 00:12:03.157 ************************************ 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.157 ************************************ 00:12:03.157 START TEST filesystem_in_capsule_xfs 00:12:03.157 ************************************ 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:03.157 18:23:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:03.157 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:03.157 = sectsz=512 attr=2, projid32bit=1 00:12:03.157 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:03.157 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:03.158 data = bsize=4096 blocks=130560, imaxpct=25 00:12:03.158 = sunit=0 swidth=0 blks 00:12:03.158 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:03.158 log =internal log bsize=4096 blocks=16384, version=2 00:12:03.158 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:03.158 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:04.097 Discarding blocks...Done. 
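The three filesystem_in_capsule sub-tests (ext4, btrfs, and xfs above) all go through the make_filesystem helper in common/autotest_common.sh. A sketch of that helper as it can be reconstructed from the xtrace (the variable names and the force-flag selection match the trace; the trace's `local i=0` hints at retry logic that is not exercised in these runs and is omitted here):

    make_filesystem() {
        local fstype=$1      # ext4 | btrfs | xfs
        local dev_name=$2    # e.g. /dev/nvme0n1p1
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F         # mke2fs forces creation with -F
        else
            force=-f         # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs."$fstype" $force "$dev_name"
    }

Each filesystem is then exercised identically per the trace: mount /dev/nvme0n1p1 on /mnt/device, touch and remove a file with a sync on either side, umount, and a `kill -0` check that the target process (pid 2017891) is still alive.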
00:12:04.097 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:04.097 18:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2017891 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.638 00:12:06.638 real 0m3.754s 00:12:06.638 user 0m0.036s 00:12:06.638 sys 0m0.072s 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.638 ************************************ 00:12:06.638 END TEST filesystem_in_capsule_xfs 00:12:06.638 ************************************ 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.638 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2017891 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2017891 ']' 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2017891 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2017891 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2017891' 00:12:06.899 killing process with pid 2017891 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2017891 00:12:06.899 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2017891 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.160 00:12:07.160 real 0m18.649s 00:12:07.160 user 1m13.722s 00:12:07.160 sys 0m1.425s 00:12:07.160 18:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.160 ************************************ 00:12:07.160 END TEST nvmf_filesystem_in_capsule 00:12:07.160 ************************************ 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.160 rmmod nvme_tcp 00:12:07.160 rmmod nvme_fabrics 00:12:07.160 rmmod nvme_keyring 00:12:07.160 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.421 18:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:09.527 00:12:09.527 real 0m50.589s 00:12:09.527 user 2m41.357s 00:12:09.527 sys 0m8.864s 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:09.527 
************************************ 00:12:09.527 END TEST nvmf_filesystem 00:12:09.527 ************************************ 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.527 ************************************ 00:12:09.527 START TEST nvmf_target_discovery 00:12:09.527 ************************************ 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:09.527 * Looking for test storage... 00:12:09.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:09.527 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:09.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.528 --rc genhtml_branch_coverage=1 00:12:09.528 --rc genhtml_function_coverage=1 00:12:09.528 --rc genhtml_legend=1 00:12:09.528 --rc geninfo_all_blocks=1 00:12:09.528 --rc geninfo_unexecuted_blocks=1 00:12:09.528 00:12:09.528 ' 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:09.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.528 --rc genhtml_branch_coverage=1 00:12:09.528 --rc genhtml_function_coverage=1 00:12:09.528 --rc genhtml_legend=1 00:12:09.528 --rc geninfo_all_blocks=1 00:12:09.528 --rc geninfo_unexecuted_blocks=1 00:12:09.528 00:12:09.528 ' 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:09.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.528 --rc genhtml_branch_coverage=1 00:12:09.528 --rc genhtml_function_coverage=1 00:12:09.528 --rc genhtml_legend=1 00:12:09.528 --rc geninfo_all_blocks=1 00:12:09.528 --rc geninfo_unexecuted_blocks=1 00:12:09.528 00:12:09.528 ' 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:09.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.528 --rc genhtml_branch_coverage=1 00:12:09.528 --rc genhtml_function_coverage=1 00:12:09.528 --rc genhtml_legend=1 00:12:09.528 --rc geninfo_all_blocks=1 00:12:09.528 --rc geninfo_unexecuted_blocks=1 00:12:09.528 00:12:09.528 ' 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.528 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:09.790 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:09.791 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:09.791 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.791 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.791 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.791 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:09.791 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:09.791 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:09.791 18:24:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:17.938 18:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:17.938 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:17.938 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.938 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:17.939 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:17.939 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.939 18:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:17.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:17.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:12:17.939 00:12:17.939 --- 10.0.0.2 ping statistics --- 00:12:17.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.939 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:12:17.939 00:12:17.939 --- 10.0.0.1 ping statistics --- 00:12:17.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.939 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2026393 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2026393 00:12:17.939 18:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2026393 ']' 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.939 18:24:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.939 [2024-12-06 18:24:11.982517] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:12:17.939 [2024-12-06 18:24:11.982589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.939 [2024-12-06 18:24:12.082004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.939 [2024-12-06 18:24:12.135099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.939 [2024-12-06 18:24:12.135155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.939 [2024-12-06 18:24:12.135163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.939 [2024-12-06 18:24:12.135170] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.939 [2024-12-06 18:24:12.135177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
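
The nvmf_tcp_init sequence above splits the two E810 ports across network namespaces: the target-side port (cvl_0_0, 10.0.0.2) moves into cvl_0_0_ns_spdk while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, a tagged iptables rule opens TCP/4420, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. A condensed sketch of those steps, run as root; names, addresses, and flags are taken from the trace, and the iptables comment is simplified relative to the full tag the script writes:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Tagged so teardown can filter it out again with iptables-save | grep -v SPDK_NVMF
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  # The target itself then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
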
00:12:17.939 [2024-12-06 18:24:12.137534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.939 [2024-12-06 18:24:12.137701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.939 [2024-12-06 18:24:12.137864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.939 [2024-12-06 18:24:12.137866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.200 [2024-12-06 18:24:12.860006] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.200 Null1 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.200 18:24:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.200 [2024-12-06 18:24:12.930872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.200 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.201 Null2 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.201 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:18.463 Null3 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 Null4 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.463 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:18.463 00:12:18.463 Discovery Log Number of Records 6, Generation counter 6 00:12:18.463 =====Discovery Log Entry 0====== 00:12:18.463 trtype: tcp 00:12:18.463 adrfam: ipv4 00:12:18.463 subtype: current discovery subsystem 00:12:18.463 treq: not required 00:12:18.463 portid: 0 00:12:18.463 trsvcid: 4420 00:12:18.463 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:18.463 traddr: 10.0.0.2 00:12:18.463 eflags: explicit discovery connections, duplicate discovery information 00:12:18.463 sectype: none 00:12:18.463 =====Discovery Log Entry 1====== 00:12:18.463 trtype: tcp 00:12:18.463 adrfam: ipv4 00:12:18.463 subtype: nvme subsystem 00:12:18.463 treq: not required 00:12:18.463 portid: 0 00:12:18.463 trsvcid: 4420 00:12:18.464 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:18.464 traddr: 10.0.0.2 00:12:18.464 eflags: none 00:12:18.464 sectype: none 00:12:18.464 =====Discovery Log Entry 2====== 00:12:18.464 trtype: tcp 00:12:18.464 adrfam: ipv4 00:12:18.464 subtype: nvme subsystem 00:12:18.464 treq: not required 00:12:18.464 portid: 0 00:12:18.464 trsvcid: 4420 00:12:18.464 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:18.464 traddr: 10.0.0.2 00:12:18.464 eflags: none 00:12:18.464 sectype: none 00:12:18.464 =====Discovery Log Entry 3====== 00:12:18.464 trtype: tcp 00:12:18.464 adrfam: ipv4 00:12:18.464 subtype: nvme subsystem 00:12:18.464 treq: not required 00:12:18.464 portid: 0 00:12:18.464 trsvcid: 4420 00:12:18.464 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:18.464 traddr: 10.0.0.2 00:12:18.464 eflags: none 00:12:18.464 sectype: none 00:12:18.464 =====Discovery Log Entry 4====== 00:12:18.464 trtype: tcp 00:12:18.464 adrfam: ipv4 00:12:18.464 subtype: nvme subsystem 
00:12:18.464 treq: not required 00:12:18.464 portid: 0 00:12:18.464 trsvcid: 4420 00:12:18.464 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:18.464 traddr: 10.0.0.2 00:12:18.464 eflags: none 00:12:18.464 sectype: none 00:12:18.464 =====Discovery Log Entry 5====== 00:12:18.464 trtype: tcp 00:12:18.464 adrfam: ipv4 00:12:18.464 subtype: discovery subsystem referral 00:12:18.464 treq: not required 00:12:18.464 portid: 0 00:12:18.464 trsvcid: 4430 00:12:18.464 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:18.464 traddr: 10.0.0.2 00:12:18.464 eflags: none 00:12:18.464 sectype: none 00:12:18.464 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:18.464 Perform nvmf subsystem discovery via RPC 00:12:18.464 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:18.464 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.464 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.727 [ 00:12:18.727 { 00:12:18.727 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:18.727 "subtype": "Discovery", 00:12:18.727 "listen_addresses": [ 00:12:18.727 { 00:12:18.727 "trtype": "TCP", 00:12:18.727 "adrfam": "IPv4", 00:12:18.727 "traddr": "10.0.0.2", 00:12:18.727 "trsvcid": "4420" 00:12:18.727 } 00:12:18.727 ], 00:12:18.727 "allow_any_host": true, 00:12:18.727 "hosts": [] 00:12:18.727 }, 00:12:18.727 { 00:12:18.727 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:18.727 "subtype": "NVMe", 00:12:18.727 "listen_addresses": [ 00:12:18.727 { 00:12:18.727 "trtype": "TCP", 00:12:18.727 "adrfam": "IPv4", 00:12:18.727 "traddr": "10.0.0.2", 00:12:18.727 "trsvcid": "4420" 00:12:18.727 } 00:12:18.727 ], 00:12:18.727 "allow_any_host": true, 00:12:18.727 "hosts": [], 00:12:18.727 "serial_number": "SPDK00000000000001", 00:12:18.727 "model_number": "SPDK bdev Controller", 00:12:18.727 "max_namespaces": 32, 00:12:18.727 "min_cntlid": 1, 00:12:18.727 "max_cntlid": 65519, 00:12:18.727 "namespaces": [ 00:12:18.727 { 00:12:18.727 "nsid": 1, 00:12:18.727 "bdev_name": "Null1", 00:12:18.727 "name": "Null1", 00:12:18.727 "nguid": "49968D9CA5334F3C98D610F9C7E08C3B", 00:12:18.727 "uuid": "49968d9c-a533-4f3c-98d6-10f9c7e08c3b" 00:12:18.727 } 00:12:18.727 ] 00:12:18.727 }, 00:12:18.727 { 00:12:18.727 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:18.727 "subtype": "NVMe", 00:12:18.727 "listen_addresses": [ 00:12:18.727 { 00:12:18.727 "trtype": "TCP", 00:12:18.727 "adrfam": "IPv4", 00:12:18.727 "traddr": "10.0.0.2", 00:12:18.727 "trsvcid": "4420" 00:12:18.727 } 00:12:18.727 ], 00:12:18.727 "allow_any_host": true, 00:12:18.727 "hosts": [], 00:12:18.727 "serial_number": "SPDK00000000000002", 00:12:18.727 "model_number": "SPDK bdev Controller", 00:12:18.727 "max_namespaces": 32, 00:12:18.727 "min_cntlid": 1, 00:12:18.727 "max_cntlid": 65519, 00:12:18.727 "namespaces": [ 00:12:18.727 { 00:12:18.727 "nsid": 1, 00:12:18.727 "bdev_name": "Null2", 00:12:18.727 "name": "Null2", 00:12:18.727 "nguid": "311069F92A204C6FA83712323649A204", 00:12:18.727 "uuid": "311069f9-2a20-4c6f-a837-12323649a204" 00:12:18.727 } 00:12:18.727 ] 00:12:18.727 }, 00:12:18.727 { 00:12:18.727 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:18.727 "subtype": "NVMe", 00:12:18.727 "listen_addresses": [ 00:12:18.727 { 00:12:18.727 "trtype": "TCP", 00:12:18.727 "adrfam": "IPv4", 00:12:18.727 "traddr": "10.0.0.2", 
00:12:18.727 "trsvcid": "4420" 00:12:18.727 } 00:12:18.727 ], 00:12:18.727 "allow_any_host": true, 00:12:18.727 "hosts": [], 00:12:18.727 "serial_number": "SPDK00000000000003", 00:12:18.727 "model_number": "SPDK bdev Controller", 00:12:18.727 "max_namespaces": 32, 00:12:18.727 "min_cntlid": 1, 00:12:18.727 "max_cntlid": 65519, 00:12:18.727 "namespaces": [ 00:12:18.727 { 00:12:18.727 "nsid": 1, 00:12:18.727 "bdev_name": "Null3", 00:12:18.727 "name": "Null3", 00:12:18.727 "nguid": "793ED4B4EBE9499C91820A377950D14B", 00:12:18.727 "uuid": "793ed4b4-ebe9-499c-9182-0a377950d14b" 00:12:18.727 } 00:12:18.727 ] 00:12:18.727 }, 00:12:18.727 { 00:12:18.727 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:18.727 "subtype": "NVMe", 00:12:18.727 "listen_addresses": [ 00:12:18.727 { 00:12:18.727 "trtype": "TCP", 00:12:18.727 "adrfam": "IPv4", 00:12:18.727 "traddr": "10.0.0.2", 00:12:18.727 "trsvcid": "4420" 00:12:18.727 } 00:12:18.727 ], 00:12:18.727 "allow_any_host": true, 00:12:18.727 "hosts": [], 00:12:18.727 "serial_number": "SPDK00000000000004", 00:12:18.727 "model_number": "SPDK bdev Controller", 00:12:18.727 "max_namespaces": 32, 00:12:18.727 "min_cntlid": 1, 00:12:18.727 "max_cntlid": 65519, 00:12:18.727 "namespaces": [ 00:12:18.727 { 00:12:18.727 "nsid": 1, 00:12:18.727 "bdev_name": "Null4", 00:12:18.727 "name": "Null4", 00:12:18.727 "nguid": "76AB26CCA81A43FBAF47A79D7BFEF272", 00:12:18.727 "uuid": "76ab26cc-a81a-43fb-af47-a79d7bfef272" 00:12:18.727 } 00:12:18.727 ] 00:12:18.727 } 00:12:18.727 ] 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.727 18:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:18.727 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:18.728 18:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.728 rmmod nvme_tcp 00:12:18.728 rmmod nvme_fabrics 00:12:18.728 rmmod nvme_keyring 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2026393 ']' 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2026393 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2026393 ']' 00:12:18.728 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2026393 00:12:18.990 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:18.990 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2026393 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2026393' 00:12:18.991 killing process with pid 2026393 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2026393 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2026393 00:12:18.991 18:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.991 18:24:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.542 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:21.542 00:12:21.542 real 0m11.717s 00:12:21.542 user 0m8.803s 00:12:21.542 sys 0m6.152s 00:12:21.542 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.542 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:21.542 ************************************ 00:12:21.542 END TEST nvmf_target_discovery 00:12:21.542 ************************************ 00:12:21.542 18:24:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:21.542 18:24:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:21.542 18:24:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.542 18:24:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.542 ************************************ 00:12:21.542 START TEST nvmf_referrals 00:12:21.542 ************************************ 00:12:21.542 18:24:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:21.542 * Looking for test storage... 
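
For reference, the discovery test that just finished reduces to a handful of RPCs plus one nvme-cli call: four null bdevs exported as cnode1-4 on 10.0.0.2:4420, a discovery listener, and a referral to port 4430, which together produce the six discovery log records shown earlier. A condensed sketch of that flow; NQNs, sizes, and addresses are the ones from the trace, and rpc.py talks to the target over /var/tmp/spdk.sock:

  RPC=./scripts/rpc.py                       # RPC client shipped in the SPDK tree
  $RPC nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
      $RPC bdev_null_create Null$i 102400 512
      $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # expects 6 records: discovery, cnode1-4, referral
  $RPC nvmf_get_subsystems                   # the JSON dump shown earlier

Teardown then mirrors this in reverse (delete subsystems and bdevs, remove the referral) and finally strips the tagged firewall rule via iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen in the trace above.
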
00:12:21.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.542 --rc genhtml_branch_coverage=1 00:12:21.542 --rc genhtml_function_coverage=1 00:12:21.542 --rc genhtml_legend=1 00:12:21.542 --rc geninfo_all_blocks=1 00:12:21.542 --rc geninfo_unexecuted_blocks=1 00:12:21.542 00:12:21.542 ' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.542 --rc genhtml_branch_coverage=1 00:12:21.542 --rc genhtml_function_coverage=1 00:12:21.542 --rc genhtml_legend=1 00:12:21.542 --rc geninfo_all_blocks=1 00:12:21.542 --rc geninfo_unexecuted_blocks=1 00:12:21.542 00:12:21.542 ' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.542 --rc genhtml_branch_coverage=1 00:12:21.542 --rc genhtml_function_coverage=1 00:12:21.542 --rc genhtml_legend=1 00:12:21.542 --rc geninfo_all_blocks=1 00:12:21.542 --rc geninfo_unexecuted_blocks=1 00:12:21.542 00:12:21.542 ' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:21.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.542 --rc genhtml_branch_coverage=1 00:12:21.542 --rc genhtml_function_coverage=1 00:12:21.542 --rc genhtml_legend=1 00:12:21.542 --rc geninfo_all_blocks=1 00:12:21.542 --rc geninfo_unexecuted_blocks=1 00:12:21.542 00:12:21.542 ' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:21.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
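
The constants above pin three referral endpoints on the loopback range (127.0.0.2 through 127.0.0.4, port 4430) for the referrals test that follows. A hedged sketch of how such referrals are typically driven over RPC; the IPs and port come from the trace, but the loop and the get/remove calls below are illustrative, not a transcript of referrals.sh:

  RPC=./scripts/rpc.py
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  $RPC nvmf_discovery_get_referrals          # list the current referrals as JSON
  $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
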
00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:21.542 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.543 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.543 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.543 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:21.543 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:21.543 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:21.543 18:24:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:29.690 18:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:29.690 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:29.690 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:29.690 
18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:29.690 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:29.690 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:29.690 18:24:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:29.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:29.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:12:29.690 00:12:29.690 --- 10.0.0.2 ping statistics --- 00:12:29.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.690 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:12:29.690 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:29.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:12:29.690 00:12:29.690 --- 10.0.0.1 ping statistics --- 00:12:29.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.691 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2031078 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2031078 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2031078 ']' 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
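The trace above completes nvmf_tcp_init before the target application is launched: the second e810 port (cvl_0_0) is moved into a private network namespace to serve as the NVMe/TCP target at 10.0.0.2, while cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1; an iptables rule opens the default NVMe/TCP port 4420 on the initiator-facing interface, and a ping in each direction confirms reachability. A minimal standalone sketch of that wiring, assuming root privileges and the two interface names shown in the log:

    # create the target-side namespace and move one port into it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator (host) end and the target (namespace) end
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring the links up, including loopback inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the default NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions before launching nvmf_tgt
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1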
00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 [2024-12-06 18:24:23.740151] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:12:29.691 [2024-12-06 18:24:23.740218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.691 [2024-12-06 18:24:23.812372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.691 [2024-12-06 18:24:23.859124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.691 [2024-12-06 18:24:23.859172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.691 [2024-12-06 18:24:23.859184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.691 [2024-12-06 18:24:23.859190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.691 [2024-12-06 18:24:23.859194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.691 [2024-12-06 18:24:23.860955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.691 [2024-12-06 18:24:23.861213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.691 [2024-12-06 18:24:23.861381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.691 [2024-12-06 18:24:23.861383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.691 18:24:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 [2024-12-06 18:24:24.023937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
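At this point nvmf_tgt is running inside the namespace (started via ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, giving four reactor cores) and the referrals test drives it over the default /var/tmp/spdk.sock RPC socket: create the TCP transport, attach a discovery listener on 10.0.0.2:8009, then, as traced below, register referrals to 127.0.0.2 through 127.0.0.4 on port 4430 and verify the count. rpc_cmd in the trace is the autotest wrapper around SPDK's scripts/rpc.py, so the same sequence can be reproduced by hand roughly as follows (relative paths assume an SPDK checkout; the socket path is the default):

    # configure the running target over JSON-RPC; flags exactly as in the trace above
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    # add three discovery referrals, then count them
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # the test expects 3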
00:12:29.691 [2024-12-06 18:24:24.046916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.691 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.692 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.692 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:29.692 18:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:29.692 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.692 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.692 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.692 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.692 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:29.953 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.214 18:24:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:30.481 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:30.481 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:30.481 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:30.481 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:30.481 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.481 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.745 18:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:30.745 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.006 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:31.006 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:31.006 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:31.006 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:31.006 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:31.006 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.006 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:31.268 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:31.268 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:31.268 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:31.268 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:12:31.268 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.268 18:24:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.268 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.529 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.529 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:31.529 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:31.529 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:31.529 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:31.529 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:31.529 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:31.529 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:31.790 rmmod nvme_tcp 00:12:31.790 rmmod nvme_fabrics 00:12:31.790 rmmod nvme_keyring 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2031078 ']' 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2031078 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2031078 ']' 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2031078 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2031078 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2031078' 00:12:31.790 killing process with pid 2031078 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2031078 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2031078 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:31.790 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:32.051 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:32.051 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:32.052 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:32.052 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.052 18:24:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.052 18:24:26 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.967 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:33.967 00:12:33.967 real 0m12.740s 00:12:33.967 user 0m13.798s 00:12:33.967 sys 0m6.564s 00:12:33.967 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.967 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.967 ************************************ 00:12:33.967 END TEST nvmf_referrals 00:12:33.967 ************************************ 00:12:33.967 18:24:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:33.967 18:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:33.967 18:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.967 18:24:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.967 ************************************ 00:12:33.967 START TEST nvmf_connect_disconnect 00:12:33.967 ************************************ 00:12:33.967 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:34.229 * Looking for test storage... 00:12:34.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.229 18:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:34.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.229 --rc genhtml_branch_coverage=1 00:12:34.229 --rc genhtml_function_coverage=1 00:12:34.229 --rc genhtml_legend=1 00:12:34.229 --rc geninfo_all_blocks=1 00:12:34.229 --rc geninfo_unexecuted_blocks=1 00:12:34.229 00:12:34.229 ' 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:34.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.229 --rc genhtml_branch_coverage=1 00:12:34.229 --rc genhtml_function_coverage=1 00:12:34.229 --rc genhtml_legend=1 00:12:34.229 --rc geninfo_all_blocks=1 00:12:34.229 --rc geninfo_unexecuted_blocks=1 00:12:34.229 00:12:34.229 ' 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:34.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.229 --rc genhtml_branch_coverage=1 00:12:34.229 --rc genhtml_function_coverage=1 00:12:34.229 --rc genhtml_legend=1 00:12:34.229 --rc geninfo_all_blocks=1 00:12:34.229 --rc geninfo_unexecuted_blocks=1 00:12:34.229 00:12:34.229 ' 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:34.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.229 --rc genhtml_branch_coverage=1 00:12:34.229 --rc genhtml_function_coverage=1 00:12:34.229 --rc genhtml_legend=1 00:12:34.229 --rc geninfo_all_blocks=1 00:12:34.229 --rc geninfo_unexecuted_blocks=1 00:12:34.229 00:12:34.229 ' 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.229 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.230 18:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:34.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:34.230 18:24:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:42.397 
18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:42.397 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.397 
18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:42.397 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:42.397 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:42.398 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:42.398 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:42.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:12:42.398 00:12:42.398 --- 10.0.0.2 ping statistics --- 00:12:42.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.398 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:12:42.398 00:12:42.398 --- 10.0.0.1 ping statistics --- 00:12:42.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.398 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2035857 00:12:42.398 18:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2035857 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2035857 ']' 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.398 18:24:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.398 [2024-12-06 18:24:36.531427] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:12:42.398 [2024-12-06 18:24:36.531495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.398 [2024-12-06 18:24:36.631796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.398 [2024-12-06 18:24:36.685603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.398 [2024-12-06 18:24:36.685670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.398 [2024-12-06 18:24:36.685680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.398 [2024-12-06 18:24:36.685688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.398 [2024-12-06 18:24:36.685694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:42.398 [2024-12-06 18:24:36.687772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.398 [2024-12-06 18:24:36.687940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.398 [2024-12-06 18:24:36.688103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.398 [2024-12-06 18:24:36.688103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.660 [2024-12-06 18:24:37.406121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.660 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.921 18:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:42.921 [2024-12-06 18:24:37.482866] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:42.921 18:24:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:47.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.235 rmmod nvme_tcp 00:13:01.235 rmmod nvme_fabrics 00:13:01.235 rmmod nvme_keyring 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2035857 ']' 00:13:01.235 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2035857 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2035857 ']' 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2035857 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2035857 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2035857' 00:13:01.236 killing process with pid 2035857 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2035857 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2035857 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.236 18:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.786 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.786 00:13:03.786 real 0m29.239s 00:13:03.786 user 1m18.719s 00:13:03.786 sys 0m7.093s 00:13:03.786 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.786 18:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.786 ************************************ 00:13:03.786 END TEST nvmf_connect_disconnect 00:13:03.786 ************************************ 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.786 18:24:58 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.786 ************************************ 00:13:03.786 START TEST nvmf_multitarget 00:13:03.786 ************************************ 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:03.786 * Looking for test storage... 00:13:03.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.786 --rc genhtml_branch_coverage=1 00:13:03.786 --rc genhtml_function_coverage=1 00:13:03.786 --rc genhtml_legend=1 00:13:03.786 --rc geninfo_all_blocks=1 00:13:03.786 --rc geninfo_unexecuted_blocks=1 00:13:03.786 00:13:03.786 ' 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.786 --rc genhtml_branch_coverage=1 00:13:03.786 --rc genhtml_function_coverage=1 00:13:03.786 --rc genhtml_legend=1 00:13:03.786 --rc geninfo_all_blocks=1 00:13:03.786 --rc geninfo_unexecuted_blocks=1 00:13:03.786 00:13:03.786 ' 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.786 --rc genhtml_branch_coverage=1 00:13:03.786 --rc genhtml_function_coverage=1 00:13:03.786 --rc genhtml_legend=1 00:13:03.786 --rc geninfo_all_blocks=1 00:13:03.786 --rc geninfo_unexecuted_blocks=1 00:13:03.786 00:13:03.786 ' 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:03.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.786 --rc genhtml_branch_coverage=1 00:13:03.786 --rc genhtml_function_coverage=1 00:13:03.786 --rc genhtml_legend=1 00:13:03.786 --rc geninfo_all_blocks=1 00:13:03.786 --rc geninfo_unexecuted_blocks=1 00:13:03.786 00:13:03.786 ' 00:13:03.786 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.787 18:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:03.787 18:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.787 18:24:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:11.935 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.935 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:11.936 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:11.936 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:11.936 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:11.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:13:11.936 00:13:11.936 --- 10.0.0.2 ping statistics --- 00:13:11.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.936 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:13:11.936 00:13:11.936 --- 10.0.0.1 ping statistics --- 00:13:11.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.936 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2043977 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2043977 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2043977 ']' 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.936 18:25:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.936 [2024-12-06 18:25:05.813863] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:13:11.936 [2024-12-06 18:25:05.813932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.936 [2024-12-06 18:25:05.912135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.936 [2024-12-06 18:25:05.964768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.936 [2024-12-06 18:25:05.964821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.936 [2024-12-06 18:25:05.964830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.936 [2024-12-06 18:25:05.964838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.936 [2024-12-06 18:25:05.964844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.936 [2024-12-06 18:25:05.966837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.936 [2024-12-06 18:25:05.966998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.936 [2024-12-06 18:25:05.967158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.936 [2024-12-06 18:25:05.967158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.936 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.937 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:11.937 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.937 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.937 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:11.937 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.937 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:11.937 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:11.937 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:12.199 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:12.199 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:12.199 "nvmf_tgt_1" 00:13:12.199 18:25:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:12.460 "nvmf_tgt_2" 00:13:12.460 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:13:12.460 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:12.460 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:12.460 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:12.460 true 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:12.722 true 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:12.722 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:12.983 rmmod nvme_tcp 00:13:12.983 rmmod nvme_fabrics 00:13:12.983 rmmod nvme_keyring 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2043977 ']' 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2043977 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2043977 ']' 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2043977 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2043977 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.983 18:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2043977' 00:13:12.983 killing process with pid 2043977 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2043977 00:13:12.983 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2043977 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.246 18:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.164 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.165 00:13:15.165 real 0m11.819s 00:13:15.165 user 0m10.335s 00:13:15.165 sys 0m6.117s 00:13:15.165 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.165 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:15.165 ************************************ 00:13:15.165 END TEST nvmf_multitarget 00:13:15.165 ************************************ 00:13:15.165 18:25:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:15.165 18:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.165 18:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.165 18:25:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.441 ************************************ 00:13:15.441 START TEST nvmf_rpc 00:13:15.441 ************************************ 00:13:15.441 18:25:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:15.441 * Looking for test storage... 
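That closes nvmf_multitarget: the EXIT trap fires nvmftestfini, which unloads the kernel modules, kills the target by pid, strips only the SPDK-tagged iptables rules, and tears down the namespace. A rough sketch of that teardown order, with names taken from the log (the real logic lives in nvmf/common.sh and retries the modprobe in a loop), before nvmf_rpc begins its own bring-up below:

    sync
    modprobe -v -r nvme-tcp nvme-fabrics || true          # harness loops up to 20 tries
    kill "$nvmfpid"                                       # pid recorded by nvmfappstart
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep unrelated rules intact
    ip netns delete cvl_0_0_ns_spdk                       # roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1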
00:13:15.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.441 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.442 --rc genhtml_branch_coverage=1 00:13:15.442 --rc genhtml_function_coverage=1 00:13:15.442 --rc genhtml_legend=1 00:13:15.442 --rc geninfo_all_blocks=1 00:13:15.442 --rc geninfo_unexecuted_blocks=1 00:13:15.442 00:13:15.442 ' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.442 --rc genhtml_branch_coverage=1 00:13:15.442 --rc genhtml_function_coverage=1 00:13:15.442 --rc genhtml_legend=1 00:13:15.442 --rc geninfo_all_blocks=1 00:13:15.442 --rc geninfo_unexecuted_blocks=1 00:13:15.442 00:13:15.442 ' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.442 --rc genhtml_branch_coverage=1 00:13:15.442 --rc genhtml_function_coverage=1 00:13:15.442 --rc genhtml_legend=1 00:13:15.442 --rc geninfo_all_blocks=1 00:13:15.442 --rc geninfo_unexecuted_blocks=1 00:13:15.442 00:13:15.442 ' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.442 --rc genhtml_branch_coverage=1 00:13:15.442 --rc genhtml_function_coverage=1 00:13:15.442 --rc genhtml_legend=1 00:13:15.442 --rc geninfo_all_blocks=1 00:13:15.442 --rc geninfo_unexecuted_blocks=1 00:13:15.442 00:13:15.442 ' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
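The scripts/common.sh probe above is the lt/cmp_versions helper pair: it splits two version strings on dots and dashes and compares them field by numeric field, here deciding that lcov 1.15 < 2 and therefore enabling the legacy branch/function coverage flags. A standalone sketch of that comparison (the function name mirrors the log, but this is a paraphrase, not the verbatim helper):

    lt() {   # succeed if $1 sorts before $2, field by numeric field
        local -a v1 v2; local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: add coverage flags"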
00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.442 18:25:10 
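The lone error in this stretch, "line 33: [: : integer expression expected", is build_nvmf_app_args testing an empty variable numerically: '[' '' -eq 1 ']'. It is harmless here because the branch is meant to be skipped anyway, but the usual hardening is to default the expansion before the numeric test. A two-line illustration (flag is a stand-in, not the actual harness variable):

    flag=""                          # stand-in for the unset harness variable
    if [ "${flag:-0}" -eq 1 ]; then  # empty collapses to 0 instead of erroring
        echo "feature enabled"
    fi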
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.442 18:25:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:23.609 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.609 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:23.610 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:23.610 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:23.610 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:23.610 18:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:23.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:13:23.610 00:13:23.610 --- 10.0.0.2 ping statistics --- 00:13:23.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.610 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
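The namespace plumbing just executed puts the target interface (cvl_0_0, 10.0.0.2) into its own netns while the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, opens TCP/4420 with a tagged iptables rule, and ping-checks both directions (the 10.0.0.1 reply continues below). The same topology, condensed from the commands in the log (only the iptables comment text is illustrative):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow target port'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator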
00:13:23.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:13:23.610 00:13:23.610 --- 10.0.0.1 ping statistics --- 00:13:23.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.610 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2048616 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2048616 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2048616 ']' 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:23.610 18:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.610 [2024-12-06 18:25:17.841701] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:13:23.610 [2024-12-06 18:25:17.841767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.610 [2024-12-06 18:25:17.942439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.610 [2024-12-06 18:25:17.995226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.610 [2024-12-06 18:25:17.995287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.610 [2024-12-06 18:25:17.995296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.610 [2024-12-06 18:25:17.995304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.610 [2024-12-06 18:25:17.995310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.610 [2024-12-06 18:25:17.997374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.610 [2024-12-06 18:25:17.997539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.610 [2024-12-06 18:25:17.997700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.610 [2024-12-06 18:25:17.997702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:24.274 "tick_rate": 2400000000, 00:13:24.274 "poll_groups": [ 00:13:24.274 { 00:13:24.274 "name": "nvmf_tgt_poll_group_000", 00:13:24.274 "admin_qpairs": 0, 00:13:24.274 "io_qpairs": 0, 00:13:24.274 "current_admin_qpairs": 0, 00:13:24.274 "current_io_qpairs": 0, 00:13:24.274 "pending_bdev_io": 0, 00:13:24.274 "completed_nvme_io": 0, 00:13:24.274 "transports": [] 00:13:24.274 }, 00:13:24.274 { 00:13:24.274 "name": "nvmf_tgt_poll_group_001", 00:13:24.274 "admin_qpairs": 0, 00:13:24.274 "io_qpairs": 0, 00:13:24.274 "current_admin_qpairs": 0, 00:13:24.274 "current_io_qpairs": 0, 00:13:24.274 "pending_bdev_io": 0, 00:13:24.274 "completed_nvme_io": 0, 00:13:24.274 "transports": [] 00:13:24.274 }, 00:13:24.274 { 00:13:24.274 "name": "nvmf_tgt_poll_group_002", 00:13:24.274 "admin_qpairs": 0, 00:13:24.274 "io_qpairs": 0, 00:13:24.274 
"current_admin_qpairs": 0, 00:13:24.274 "current_io_qpairs": 0, 00:13:24.274 "pending_bdev_io": 0, 00:13:24.274 "completed_nvme_io": 0, 00:13:24.274 "transports": [] 00:13:24.274 }, 00:13:24.274 { 00:13:24.274 "name": "nvmf_tgt_poll_group_003", 00:13:24.274 "admin_qpairs": 0, 00:13:24.274 "io_qpairs": 0, 00:13:24.274 "current_admin_qpairs": 0, 00:13:24.274 "current_io_qpairs": 0, 00:13:24.274 "pending_bdev_io": 0, 00:13:24.274 "completed_nvme_io": 0, 00:13:24.274 "transports": [] 00:13:24.274 } 00:13:24.274 ] 00:13:24.274 }' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.274 [2024-12-06 18:25:18.835397] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:24.274 "tick_rate": 2400000000, 00:13:24.274 "poll_groups": [ 00:13:24.274 { 00:13:24.274 "name": "nvmf_tgt_poll_group_000", 00:13:24.274 "admin_qpairs": 0, 00:13:24.274 "io_qpairs": 0, 00:13:24.274 "current_admin_qpairs": 0, 00:13:24.274 "current_io_qpairs": 0, 00:13:24.274 "pending_bdev_io": 0, 00:13:24.274 "completed_nvme_io": 0, 00:13:24.274 "transports": [ 00:13:24.274 { 00:13:24.274 "trtype": "TCP" 00:13:24.274 } 00:13:24.274 ] 00:13:24.274 }, 00:13:24.274 { 00:13:24.274 "name": "nvmf_tgt_poll_group_001", 00:13:24.274 "admin_qpairs": 0, 00:13:24.274 "io_qpairs": 0, 00:13:24.274 "current_admin_qpairs": 0, 00:13:24.274 "current_io_qpairs": 0, 00:13:24.274 "pending_bdev_io": 0, 00:13:24.274 "completed_nvme_io": 0, 00:13:24.274 "transports": [ 00:13:24.274 { 00:13:24.274 "trtype": "TCP" 00:13:24.274 } 00:13:24.274 ] 00:13:24.274 }, 00:13:24.274 { 00:13:24.274 "name": "nvmf_tgt_poll_group_002", 00:13:24.274 "admin_qpairs": 0, 00:13:24.274 "io_qpairs": 0, 00:13:24.274 "current_admin_qpairs": 0, 00:13:24.274 "current_io_qpairs": 0, 00:13:24.274 "pending_bdev_io": 0, 00:13:24.274 "completed_nvme_io": 0, 00:13:24.274 "transports": [ 00:13:24.274 { 00:13:24.274 "trtype": "TCP" 
00:13:24.274 } 00:13:24.274 ] 00:13:24.274 }, 00:13:24.274 { 00:13:24.274 "name": "nvmf_tgt_poll_group_003", 00:13:24.274 "admin_qpairs": 0, 00:13:24.274 "io_qpairs": 0, 00:13:24.274 "current_admin_qpairs": 0, 00:13:24.274 "current_io_qpairs": 0, 00:13:24.274 "pending_bdev_io": 0, 00:13:24.274 "completed_nvme_io": 0, 00:13:24.274 "transports": [ 00:13:24.274 { 00:13:24.274 "trtype": "TCP" 00:13:24.274 } 00:13:24.274 ] 00:13:24.274 } 00:13:24.274 ] 00:13:24.274 }' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.274 18:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.573 Malloc1 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
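The '(( 4 == 4 ))' and '(( 0 == 0 ))' checks above come from two small jq wrappers in target/rpc.sh: jcount counts the lines a filter emits and jsum totals a numeric field, both applied to the nvmf_get_stats JSON (one poll group per core of the -m 0xF mask, every qpair counter zero on an idle target). A sketch of those helpers, assuming rpc_cmd is the harness's rpc.py wrapper:

    stats=$(rpc_cmd nvmf_get_stats)                      # JSON like the block above
    jcount() { jq "$1" <<< "$stats" | wc -l; }
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
    (( $(jcount '.poll_groups[].name') == 4 ))           # 4 poll groups for -m 0xF
    (( $(jsum '.poll_groups[].io_qpairs') == 0 ))        # nothing connected yet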
common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.573 [2024-12-06 18:25:19.048693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:24.573 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:24.574 [2024-12-06 18:25:19.085670] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:24.574 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:24.574 could not add new controller: failed to write to nvme-fabrics device 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:24.574 18:25:19 
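That ERROR is the expected half of the access-control test: with allow_any_host disabled (-d), a connect from a hostnqn that is not on the subsystem's allow list is rejected with "does not allow host", and the NOT wrapper counts the failure as a pass. The positive half follows immediately: nvmf_subsystem_add_host registers the NQN and the same connect succeeds. In outline, using the NVME_HOST array defined earlier in common.sh:

    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -a 10.0.0.2 -s 4420 && echo "should have been rejected" >&2
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -a 10.0.0.2 -s 4420                              # accepted this time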
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.574 18:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.094 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.094 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:26.094 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.094 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:26.094 18:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.009 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.009 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.009 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.009 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:28.009 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.009 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:28.009 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.270 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.270 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
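waitforserial, whose iterations are visible above, is just a poll: list block devices with their serials and wait until one carrying the subsystem serial (SPDKISFASTANDAWESOME, set at nvmf_create_subsystem time) appears. A close paraphrase of the helper the xtrace shows, not a verbatim copy:

    waitforserial() {
        local serial=$1 i=0 nvme_devices
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            (( nvme_devices >= 1 )) && return 0
            sleep 2
        done
        return 1                       # device never showed up
    }
    waitforserial SPDKISFASTANDAWESOME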
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:28.271 [2024-12-06 18:25:22.872207] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:28.271 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:28.271 could not add new controller: failed to write to nvme-fabrics device 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.271 
18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.271 18:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.185 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.185 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:30.185 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.185 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:30.185 18:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.099 
18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 [2024-12-06 18:25:26.640676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.099 18:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.486 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:33.486 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:33.486 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.486 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:33.486 18:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:35.396 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:35.396 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:35.396 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:35.396 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:35.396 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.396 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:35.396 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.656 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.656 [2024-12-06 18:25:30.365297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.657 18:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:37.564 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.564 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:37.564 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.564 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:37.564 18:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:39.477 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:39.477 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:39.477 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.477 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:39.477 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.478 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:39.478 18:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.478 [2024-12-06 18:25:34.122171] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.478 18:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:40.866 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:40.866 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:40.866 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.866 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:40.866 18:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:43.437 
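The sleep here belongs to waitforserial, which gives the connect time to surface a namespace and then polls lsblk for the subsystem serial. A simplified shape of the helper for the one-device case exercised here (the real helper compares a device count against nvme_device_counter):

    waitforserial() {
        local serial=$1 i=0
        sleep 2                                         # let the fabrics connect settle
        while (( i++ <= 15 )); do                       # bounded retry, as in the trace
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices >= 1 )) && return 0         # serial visible: device is up
            sleep 2
        done
        return 1                                        # never appeared; fail the test
    }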
18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.437 [2024-12-06 18:25:37.841937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.437 18:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:44.825 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:44.825 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:44.825 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.825 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:44.825 18:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:46.738 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:46.738 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:46.738 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.738 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:46.738 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.738 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:46.738 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 [2024-12-06 18:25:41.597865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.000 18:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.387 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:48.387 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:48.387 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.387 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:48.387 18:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:50.935 
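From here the rpc.sh@99 loop switches to pure control-plane churn: five rounds of subsystem create, configure, and teardown with no host connect in between, exercising only the RPC path. The loop body, reconstructed from the target/rpc.sh line tags in the trace below:

    loops=5   # matches the 'seq 1 5' above
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done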
18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 [2024-12-06 18:25:45.366229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 [2024-12-06 18:25:45.434407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 
18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 [2024-12-06 18:25:45.502611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 [2024-12-06 18:25:45.574898] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.935 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.936 [2024-12-06 18:25:45.643121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.936 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.951 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.951 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.951 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:50.951 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.951 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.951 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.951 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:50.951 "tick_rate": 2400000000, 00:13:50.951 "poll_groups": [ 00:13:50.951 { 00:13:50.951 "name": "nvmf_tgt_poll_group_000", 00:13:50.951 "admin_qpairs": 0, 00:13:50.951 "io_qpairs": 224, 00:13:50.951 "current_admin_qpairs": 0, 00:13:50.951 "current_io_qpairs": 0, 00:13:50.951 "pending_bdev_io": 0, 00:13:50.951 "completed_nvme_io": 275, 00:13:50.951 "transports": [ 00:13:50.951 { 00:13:50.951 "trtype": "TCP" 00:13:50.951 } 00:13:50.951 ] 00:13:50.951 }, 00:13:50.951 { 00:13:50.951 "name": "nvmf_tgt_poll_group_001", 00:13:50.951 "admin_qpairs": 1, 00:13:50.951 "io_qpairs": 223, 00:13:50.951 "current_admin_qpairs": 0, 00:13:50.951 "current_io_qpairs": 0, 00:13:50.951 "pending_bdev_io": 0, 00:13:50.951 "completed_nvme_io": 518, 00:13:50.951 "transports": [ 00:13:50.951 { 00:13:50.951 "trtype": "TCP" 00:13:50.951 } 00:13:50.951 ] 00:13:50.951 }, 00:13:50.951 { 00:13:50.951 "name": "nvmf_tgt_poll_group_002", 00:13:50.952 "admin_qpairs": 6, 00:13:50.952 "io_qpairs": 218, 00:13:50.952 "current_admin_qpairs": 0, 00:13:50.952 "current_io_qpairs": 0, 00:13:50.952 "pending_bdev_io": 0, 00:13:50.952 "completed_nvme_io": 218, 00:13:50.952 "transports": [ 00:13:50.952 { 00:13:50.952 "trtype": "TCP" 00:13:50.952 } 00:13:50.952 ] 00:13:50.952 }, 00:13:50.952 { 00:13:50.952 "name": "nvmf_tgt_poll_group_003", 00:13:50.952 "admin_qpairs": 0, 00:13:50.952 "io_qpairs": 224, 00:13:50.952 "current_admin_qpairs": 0, 00:13:50.952 "current_io_qpairs": 0, 00:13:50.952 "pending_bdev_io": 0, 00:13:50.952 "completed_nvme_io": 228, 00:13:50.952 "transports": [ 00:13:50.952 { 00:13:50.952 "trtype": "TCP" 00:13:50.952 } 00:13:50.952 ] 00:13:50.952 } 00:13:50.952 ] 00:13:50.952 }' 00:13:50.952 18:25:45 
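The jsum calls that follow aggregate one numeric field across all four poll groups in the stats JSON above: jq extracts the per-group values and awk sums them. A sketch of the helper plus the two checks, with the sums worked out from the printed stats (admin_qpairs 0+1+6+0 = 7, io_qpairs 224+223+218+224 = 889):

    jsum() {
        local filter=$1
        # rpc.sh pipes an already-captured $stats variable; re-querying here is a simplification
        rpc_cmd nvmf_get_stats | jq "$filter" | awk '{s += $1} END {print s}'
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 > 0
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 889 > 0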
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:50.952 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:51.212 rmmod nvme_tcp 00:13:51.212 rmmod nvme_fabrics 00:13:51.212 rmmod nvme_keyring 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2048616 ']' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2048616 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2048616 ']' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2048616 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2048616 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2048616' 00:13:51.212 killing process with pid 2048616 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2048616 00:13:51.212 18:25:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2048616 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.472 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.473 18:25:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.387 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:53.387 00:13:53.387 real 0m38.194s 00:13:53.387 user 1m54.327s 00:13:53.387 sys 0m7.936s 00:13:53.387 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.387 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.387 ************************************ 00:13:53.387 END TEST nvmf_rpc 00:13:53.387 ************************************ 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:53.648 ************************************ 00:13:53.648 START TEST nvmf_invalid 00:13:53.648 ************************************ 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:53.648 * Looking for test storage... 
00:13:53.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:53.648 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:53.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.910 --rc genhtml_branch_coverage=1 00:13:53.910 --rc genhtml_function_coverage=1 00:13:53.910 --rc genhtml_legend=1 00:13:53.910 --rc geninfo_all_blocks=1 00:13:53.910 --rc geninfo_unexecuted_blocks=1 00:13:53.910 00:13:53.910 ' 00:13:53.910 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:53.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.910 --rc genhtml_branch_coverage=1 00:13:53.910 --rc genhtml_function_coverage=1 00:13:53.910 --rc genhtml_legend=1 00:13:53.910 --rc geninfo_all_blocks=1 00:13:53.910 --rc geninfo_unexecuted_blocks=1 00:13:53.910 00:13:53.910 ' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:53.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.911 --rc genhtml_branch_coverage=1 00:13:53.911 --rc genhtml_function_coverage=1 00:13:53.911 --rc genhtml_legend=1 00:13:53.911 --rc geninfo_all_blocks=1 00:13:53.911 --rc geninfo_unexecuted_blocks=1 00:13:53.911 00:13:53.911 ' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:53.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.911 --rc genhtml_branch_coverage=1 00:13:53.911 --rc genhtml_function_coverage=1 00:13:53.911 --rc genhtml_legend=1 00:13:53.911 --rc geninfo_all_blocks=1 00:13:53.911 --rc geninfo_unexecuted_blocks=1 00:13:53.911 00:13:53.911 ' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:53.911 18:25:48 
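The scripts/common.sh trace here is the harness checking whether the installed lcov predates 2.x (the 'lt 1.15 2' call): both version strings are split on '.', '-', and ':', then compared field by field as integers. A compact rendering of that logic, not the verbatim cmp_versions helper:

    version_lt() {                       # usage: version_lt 1.15 2
        local IFS=.-:                    # split on the same separators as cmp_versions
        local -a a b; local i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal versions are not 'less than'
    }
    version_lt 1.15 2 && echo "old lcov: keep the branch/function coverage flags below"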
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:53.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
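[editor's note] Two things stand out in the trace above. First, each time paths/export.sh is sourced it re-prepends the same /opt toolchain directories, which is why the echoed PATH is so long; nothing breaks, but an order-preserving de-duplication (a sketch, not something the script does) would look like:

    # Sketch: drop duplicate PATH entries, keeping the first occurrence.
    dedup_path() {
        local IFS=: dir out=
        local -A seen=()
        for dir in $PATH; do
            [[ -n ${seen[$dir]:-} ]] && continue
            seen[$dir]=1
            out+=${out:+:}$dir
        done
        PATH=$out
    }

Second, the log captures a genuine scripting bug: nvmf/common.sh line 33 ends up evaluating '[' '' -eq 1 ']' because the variable it tests expanded to the empty string, and test(1) rejects a non-integer operand. The usual defensive pattern (a sketch with stand-in names, not the actual SPDK fix) is to default the expansion before the numeric comparison:

    # Sketch: guard a numeric test against an unset/empty variable.
    # SOME_TEST_FLAG and --some-option are stand-ins for whatever
    # common.sh line 33 actually reads and appends.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-option)
    fi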
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:53.911 18:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
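[editor's note] invalid.sh sets RANDOM=0 before any strings are generated; assigning to RANDOM re-seeds bash's generator, so the "random" serial and model numbers later in this run come out the same every time (for a given bash build/platform), which keeps the expected-failure messages reproducible. A two-line demonstration:

    RANDOM=0                      # re-seed: sequence now repeats across runs
    echo "$RANDOM $RANDOM $RANDOM"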
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:02.055 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:02.055 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:02.055 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:02.055 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
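[editor's note] The discovery pass above matches each PCI function against whitelisted NIC device IDs (E810 is 8086:1592/8086:159b) and then expands /sys/bus/pci/devices/$pci/net/* to find the kernel interface behind it, yielding cvl_0_0 and cvl_0_1 here. A standalone sketch of the same two steps, assuming lspci's -Dnmm machine-readable output format (slot unquoted, vendor/device quoted):

    # Sketch: find E810 functions and the net devices behind them.
    lspci -Dnmm | awk '$3 == "\"8086\"" && $4 ~ /1592|159b/ {print $1}' |
    while read -r pci; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdir" ] || continue        # glob may match nothing
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done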
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.055 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.056 18:25:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.701 ms 00:14:02.056 00:14:02.056 --- 10.0.0.2 ping statistics --- 00:14:02.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.056 rtt min/avg/max/mdev = 0.701/0.701/0.701/0.000 ms 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:14:02.056 00:14:02.056 --- 10.0.0.1 ping statistics --- 00:14:02.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.056 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2058347 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2058347 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2058347 ']' 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:02.056 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.056 [2024-12-06 18:25:56.135443] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
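[editor's note] nvmf_tcp_init above splits the two-port NIC across a network namespace: cvl_0_0 becomes the target side inside cvl_0_0_ns_spdk at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens port 4420, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Condensed to the commands the trace shows (names and addresses straight from the log; the iptables comment tag is omitted):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator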
00:14:02.056 [2024-12-06 18:25:56.135520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.056 [2024-12-06 18:25:56.233503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.056 [2024-12-06 18:25:56.287216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.056 [2024-12-06 18:25:56.287271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.056 [2024-12-06 18:25:56.287280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.056 [2024-12-06 18:25:56.287286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.056 [2024-12-06 18:25:56.287292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.056 [2024-12-06 18:25:56.289355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.056 [2024-12-06 18:25:56.289516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.056 [2024-12-06 18:25:56.289656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.056 [2024-12-06 18:25:56.289664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.317 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.317 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:02.317 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:02.317 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:02.317 18:25:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:02.317 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.317 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:02.317 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25977 00:14:02.578 [2024-12-06 18:25:57.172514] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:02.578 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:02.578 { 00:14:02.578 "nqn": "nqn.2016-06.io.spdk:cnode25977", 00:14:02.578 "tgt_name": "foobar", 00:14:02.578 "method": "nvmf_create_subsystem", 00:14:02.578 "req_id": 1 00:14:02.578 } 00:14:02.578 Got JSON-RPC error response 00:14:02.578 response: 00:14:02.578 { 00:14:02.578 "code": -32603, 00:14:02.578 "message": "Unable to find target foobar" 00:14:02.578 }' 00:14:02.578 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:02.578 { 00:14:02.578 "nqn": "nqn.2016-06.io.spdk:cnode25977", 00:14:02.578 "tgt_name": "foobar", 00:14:02.578 "method": "nvmf_create_subsystem", 00:14:02.578 "req_id": 1 00:14:02.578 } 00:14:02.578 Got JSON-RPC error response 00:14:02.578 
response: 00:14:02.578 { 00:14:02.578 "code": -32603, 00:14:02.578 "message": "Unable to find target foobar" 00:14:02.578 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:02.578 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:02.578 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11022 00:14:02.840 [2024-12-06 18:25:57.377345] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11022: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:02.840 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:02.840 { 00:14:02.840 "nqn": "nqn.2016-06.io.spdk:cnode11022", 00:14:02.840 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:02.840 "method": "nvmf_create_subsystem", 00:14:02.840 "req_id": 1 00:14:02.840 } 00:14:02.840 Got JSON-RPC error response 00:14:02.840 response: 00:14:02.840 { 00:14:02.840 "code": -32602, 00:14:02.840 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:02.840 }' 00:14:02.840 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:02.840 { 00:14:02.840 "nqn": "nqn.2016-06.io.spdk:cnode11022", 00:14:02.840 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:02.840 "method": "nvmf_create_subsystem", 00:14:02.840 "req_id": 1 00:14:02.840 } 00:14:02.840 Got JSON-RPC error response 00:14:02.840 response: 00:14:02.840 { 00:14:02.840 "code": -32602, 00:14:02.840 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:02.840 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:02.840 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:02.840 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32262 00:14:02.840 [2024-12-06 18:25:57.586103] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32262: invalid model number 'SPDK_Controller' 00:14:02.840 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:02.840 { 00:14:02.840 "nqn": "nqn.2016-06.io.spdk:cnode32262", 00:14:02.840 "model_number": "SPDK_Controller\u001f", 00:14:02.840 "method": "nvmf_create_subsystem", 00:14:02.840 "req_id": 1 00:14:02.840 } 00:14:02.840 Got JSON-RPC error response 00:14:02.840 response: 00:14:02.840 { 00:14:02.840 "code": -32602, 00:14:02.840 "message": "Invalid MN SPDK_Controller\u001f" 00:14:02.840 }' 00:14:02.840 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:02.840 { 00:14:02.840 "nqn": "nqn.2016-06.io.spdk:cnode32262", 00:14:02.840 "model_number": "SPDK_Controller\u001f", 00:14:02.840 "method": "nvmf_create_subsystem", 00:14:02.840 "req_id": 1 00:14:02.840 } 00:14:02.840 Got JSON-RPC error response 00:14:02.840 response: 00:14:02.840 { 00:14:02.840 "code": -32602, 00:14:02.840 "message": "Invalid MN SPDK_Controller\u001f" 00:14:02.840 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:03.102 18:25:57 
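[editor's note] Every invalid.sh case above and below follows the same pattern: invoke an RPC with a deliberately bad argument, let it fail, capture the JSON-RPC error body, and glob-match the expected message. Condensed from the first two cases in the trace (paths, NQNs and messages as logged; the '|| true' is a sketch-level simplification of the script's error handling):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Case 1: "foobar" is not a registered target name.
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25977 2>&1) || true
    [[ $out == *"Unable to find target"* ]]

    # Case 2: serial number carries a non-printable byte (0x1f).
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11022 2>&1) || true
    [[ $out == *"Invalid SN"* ]]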
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:03.102 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:03.103 18:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:14:03.103 18:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ _ == \- ]] 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '_h`%Fzm6M8_05KYhbt2he' 00:14:03.103 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '_h`%Fzm6M8_05KYhbt2he' nqn.2016-06.io.spdk:cnode21877 00:14:03.365 [2024-12-06 18:25:57.963518] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21877: invalid serial number '_h`%Fzm6M8_05KYhbt2he' 00:14:03.366 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:03.366 { 00:14:03.366 "nqn": "nqn.2016-06.io.spdk:cnode21877", 00:14:03.366 "serial_number": "_h`%Fzm6M8_05KYhbt2he", 00:14:03.366 "method": "nvmf_create_subsystem", 00:14:03.366 "req_id": 1 00:14:03.366 } 00:14:03.366 Got JSON-RPC error response 00:14:03.366 response: 00:14:03.366 { 00:14:03.366 "code": -32602, 00:14:03.366 "message": "Invalid SN _h`%Fzm6M8_05KYhbt2he" 00:14:03.366 }' 00:14:03.366 18:25:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:03.366 { 00:14:03.366 "nqn": "nqn.2016-06.io.spdk:cnode21877", 00:14:03.366 "serial_number": "_h`%Fzm6M8_05KYhbt2he", 00:14:03.366 "method": "nvmf_create_subsystem", 00:14:03.366 "req_id": 1 00:14:03.366 } 00:14:03.366 Got JSON-RPC error response 00:14:03.366 response: 00:14:03.366 { 00:14:03.366 "code": -32602, 00:14:03.366 "message": "Invalid SN _h`%Fzm6M8_05KYhbt2he" 00:14:03.366 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' 
'79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:03.366 18:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:03.366 18:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.366 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:03.629 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:03.630 
18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 
00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9HS+X'\''s8YHkqXu+WnT09bDZ9IPXm1bxn(dqIq$ZNY' 00:14:03.630 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '9HS+X'\''s8YHkqXu+WnT09bDZ9IPXm1bxn(dqIq$ZNY' nqn.2016-06.io.spdk:cnode31111 00:14:03.891 [2024-12-06 18:25:58.509587] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31111: invalid model number '9HS+X's8YHkqXu+WnT09bDZ9IPXm1bxn(dqIq$ZNY' 00:14:03.891 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:03.891 { 00:14:03.891 "nqn": "nqn.2016-06.io.spdk:cnode31111", 00:14:03.891 "model_number": "9HS+X'\''s8YHkqXu+WnT09bDZ9IPXm1bxn(dqIq$ZNY", 00:14:03.891 "method": "nvmf_create_subsystem", 00:14:03.891 "req_id": 1 00:14:03.891 } 00:14:03.891 Got JSON-RPC error response 00:14:03.891 response: 00:14:03.891 { 00:14:03.891 "code": -32602, 00:14:03.891 "message": "Invalid MN 9HS+X'\''s8YHkqXu+WnT09bDZ9IPXm1bxn(dqIq$ZNY" 00:14:03.891 }' 00:14:03.891 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:03.891 { 00:14:03.891 "nqn": "nqn.2016-06.io.spdk:cnode31111", 00:14:03.891 "model_number": "9HS+X's8YHkqXu+WnT09bDZ9IPXm1bxn(dqIq$ZNY", 00:14:03.891 "method": "nvmf_create_subsystem", 00:14:03.891 "req_id": 1 00:14:03.891 } 00:14:03.891 Got JSON-RPC error response 00:14:03.891 response: 00:14:03.891 { 00:14:03.891 "code": -32602, 00:14:03.891 "message": "Invalid MN 9HS+X's8YHkqXu+WnT09bDZ9IPXm1bxn(dqIq$ZNY" 00:14:03.891 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:03.891 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:04.153 [2024-12-06 18:25:58.710393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.153 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:04.153 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:04.153 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:04.153 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:04.414 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@67 -- # IP= 00:14:04.414 18:25:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:04.414 [2024-12-06 18:25:59.088989] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:04.414 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:04.414 { 00:14:04.414 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:04.414 "listen_address": { 00:14:04.414 "trtype": "tcp", 00:14:04.414 "traddr": "", 00:14:04.414 "trsvcid": "4421" 00:14:04.414 }, 00:14:04.414 "method": "nvmf_subsystem_remove_listener", 00:14:04.414 "req_id": 1 00:14:04.414 } 00:14:04.414 Got JSON-RPC error response 00:14:04.414 response: 00:14:04.414 { 00:14:04.414 "code": -32602, 00:14:04.414 "message": "Invalid parameters" 00:14:04.414 }' 00:14:04.414 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:04.414 { 00:14:04.414 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:04.414 "listen_address": { 00:14:04.414 "trtype": "tcp", 00:14:04.414 "traddr": "", 00:14:04.414 "trsvcid": "4421" 00:14:04.414 }, 00:14:04.414 "method": "nvmf_subsystem_remove_listener", 00:14:04.414 "req_id": 1 00:14:04.414 } 00:14:04.414 Got JSON-RPC error response 00:14:04.414 response: 00:14:04.414 { 00:14:04.414 "code": -32602, 00:14:04.414 "message": "Invalid parameters" 00:14:04.414 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:04.414 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26626 -i 0 00:14:04.675 [2024-12-06 18:25:59.277587] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26626: invalid cntlid range [0-65519] 00:14:04.675 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:04.675 { 00:14:04.675 "nqn": "nqn.2016-06.io.spdk:cnode26626", 00:14:04.675 "min_cntlid": 0, 00:14:04.675 "method": "nvmf_create_subsystem", 00:14:04.675 "req_id": 1 00:14:04.675 } 00:14:04.675 Got JSON-RPC error response 00:14:04.675 response: 00:14:04.675 { 00:14:04.675 "code": -32602, 00:14:04.675 "message": "Invalid cntlid range [0-65519]" 00:14:04.675 }' 00:14:04.675 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:04.675 { 00:14:04.675 "nqn": "nqn.2016-06.io.spdk:cnode26626", 00:14:04.675 "min_cntlid": 0, 00:14:04.675 "method": "nvmf_create_subsystem", 00:14:04.675 "req_id": 1 00:14:04.675 } 00:14:04.675 Got JSON-RPC error response 00:14:04.675 response: 00:14:04.675 { 00:14:04.675 "code": -32602, 00:14:04.675 "message": "Invalid cntlid range [0-65519]" 00:14:04.675 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:04.675 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20579 -i 65520 00:14:04.936 [2024-12-06 18:25:59.466233] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20579: invalid cntlid range [65520-65519] 00:14:04.936 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:04.936 { 00:14:04.936 "nqn": "nqn.2016-06.io.spdk:cnode20579", 00:14:04.936 "min_cntlid": 
65520, 00:14:04.936 "method": "nvmf_create_subsystem", 00:14:04.936 "req_id": 1 00:14:04.936 } 00:14:04.936 Got JSON-RPC error response 00:14:04.936 response: 00:14:04.936 { 00:14:04.936 "code": -32602, 00:14:04.936 "message": "Invalid cntlid range [65520-65519]" 00:14:04.936 }' 00:14:04.936 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:04.936 { 00:14:04.936 "nqn": "nqn.2016-06.io.spdk:cnode20579", 00:14:04.936 "min_cntlid": 65520, 00:14:04.936 "method": "nvmf_create_subsystem", 00:14:04.936 "req_id": 1 00:14:04.936 } 00:14:04.936 Got JSON-RPC error response 00:14:04.936 response: 00:14:04.936 { 00:14:04.936 "code": -32602, 00:14:04.936 "message": "Invalid cntlid range [65520-65519]" 00:14:04.936 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:04.936 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32163 -I 0 00:14:04.936 [2024-12-06 18:25:59.658800] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32163: invalid cntlid range [1-0] 00:14:04.936 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:04.936 { 00:14:04.936 "nqn": "nqn.2016-06.io.spdk:cnode32163", 00:14:04.936 "max_cntlid": 0, 00:14:04.936 "method": "nvmf_create_subsystem", 00:14:04.936 "req_id": 1 00:14:04.937 } 00:14:04.937 Got JSON-RPC error response 00:14:04.937 response: 00:14:04.937 { 00:14:04.937 "code": -32602, 00:14:04.937 "message": "Invalid cntlid range [1-0]" 00:14:04.937 }' 00:14:04.937 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:04.937 { 00:14:04.937 "nqn": "nqn.2016-06.io.spdk:cnode32163", 00:14:04.937 "max_cntlid": 0, 00:14:04.937 "method": "nvmf_create_subsystem", 00:14:04.937 "req_id": 1 00:14:04.937 } 00:14:04.937 Got JSON-RPC error response 00:14:04.937 response: 00:14:04.937 { 00:14:04.937 "code": -32602, 00:14:04.937 "message": "Invalid cntlid range [1-0]" 00:14:04.937 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:04.937 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10585 -I 65520 00:14:05.197 [2024-12-06 18:25:59.847410] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10585: invalid cntlid range [1-65520] 00:14:05.197 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:05.197 { 00:14:05.197 "nqn": "nqn.2016-06.io.spdk:cnode10585", 00:14:05.197 "max_cntlid": 65520, 00:14:05.197 "method": "nvmf_create_subsystem", 00:14:05.197 "req_id": 1 00:14:05.197 } 00:14:05.197 Got JSON-RPC error response 00:14:05.197 response: 00:14:05.197 { 00:14:05.197 "code": -32602, 00:14:05.197 "message": "Invalid cntlid range [1-65520]" 00:14:05.197 }' 00:14:05.197 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:05.197 { 00:14:05.197 "nqn": "nqn.2016-06.io.spdk:cnode10585", 00:14:05.197 "max_cntlid": 65520, 00:14:05.197 "method": "nvmf_create_subsystem", 00:14:05.197 "req_id": 1 00:14:05.197 } 00:14:05.197 Got JSON-RPC error response 00:14:05.197 response: 00:14:05.197 { 00:14:05.197 "code": -32602, 00:14:05.197 "message": "Invalid cntlid range [1-65520]" 00:14:05.197 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
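
The cntlid probes on either side of this point all follow one negative-test pattern: call nvmf_create_subsystem with an out-of-range controller-ID bound, capture the JSON-RPC reply, and assert that the error message names the rejected range. Reduced to its skeleton (the rpc.py path is abbreviated and the || true guard is illustrative; the trace captures the reply into out and pattern-matches it exactly as shown):

out=$(scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20579 -i 65520 2>&1) || true
[[ $out == *'Invalid cntlid range'* ]]

As the error messages themselves spell out, valid controller IDs here run from 1 through 65519 (0xFFEF), so a min_cntlid of 0 or 65520, a max_cntlid of 0 or 65520, and the inverted range 6-5 each have to fail with code -32602.
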
00:14:05.197 18:25:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32532 -i 6 -I 5 00:14:05.458 [2024-12-06 18:26:00.044054] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32532: invalid cntlid range [6-5] 00:14:05.458 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:05.458 { 00:14:05.458 "nqn": "nqn.2016-06.io.spdk:cnode32532", 00:14:05.458 "min_cntlid": 6, 00:14:05.458 "max_cntlid": 5, 00:14:05.458 "method": "nvmf_create_subsystem", 00:14:05.458 "req_id": 1 00:14:05.458 } 00:14:05.458 Got JSON-RPC error response 00:14:05.458 response: 00:14:05.458 { 00:14:05.458 "code": -32602, 00:14:05.458 "message": "Invalid cntlid range [6-5]" 00:14:05.458 }' 00:14:05.458 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:05.458 { 00:14:05.458 "nqn": "nqn.2016-06.io.spdk:cnode32532", 00:14:05.458 "min_cntlid": 6, 00:14:05.458 "max_cntlid": 5, 00:14:05.458 "method": "nvmf_create_subsystem", 00:14:05.458 "req_id": 1 00:14:05.458 } 00:14:05.458 Got JSON-RPC error response 00:14:05.458 response: 00:14:05.458 { 00:14:05.458 "code": -32602, 00:14:05.458 "message": "Invalid cntlid range [6-5]" 00:14:05.458 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:05.458 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:05.458 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:05.458 { 00:14:05.458 "name": "foobar", 00:14:05.458 "method": "nvmf_delete_target", 00:14:05.458 "req_id": 1 00:14:05.458 } 00:14:05.458 Got JSON-RPC error response 00:14:05.458 response: 00:14:05.458 { 00:14:05.458 "code": -32602, 00:14:05.458 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:05.458 }' 00:14:05.458 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:05.458 { 00:14:05.459 "name": "foobar", 00:14:05.459 "method": "nvmf_delete_target", 00:14:05.459 "req_id": 1 00:14:05.459 } 00:14:05.459 Got JSON-RPC error response 00:14:05.459 response: 00:14:05.459 { 00:14:05.459 "code": -32602, 00:14:05.459 "message": "The specified target doesn't exist, cannot delete it." 
00:14:05.459 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:05.459 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:05.459 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:05.459 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:05.459 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:05.459 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:05.459 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:05.459 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:05.459 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:05.459 rmmod nvme_tcp 00:14:05.459 rmmod nvme_fabrics 00:14:05.459 rmmod nvme_keyring 00:14:05.719 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:05.719 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:05.719 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:05.719 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2058347 ']' 00:14:05.719 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2058347 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2058347 ']' 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2058347 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2058347 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2058347' 00:14:05.720 killing process with pid 2058347 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2058347 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2058347 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.720 18:26:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:08.265 00:14:08.265 real 0m14.276s 00:14:08.265 user 0m21.215s 00:14:08.265 sys 0m6.851s 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:08.265 ************************************ 00:14:08.265 END TEST nvmf_invalid 00:14:08.265 ************************************ 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.265 ************************************ 00:14:08.265 START TEST nvmf_connect_stress 00:14:08.265 ************************************ 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:08.265 * Looking for test storage... 
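
The teardown traced just above deserves a note: rather than flushing the firewall, nvmftestfini removes only the rules the test suite itself added. Every ACCEPT rule is installed with an SPDK_NVMF comment tag, so cleanup is a single filter pass; both halves of the idiom appear verbatim in this trace (the tagged insert shows up again in the setup below):

# setup: tag the rule so it can be found later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# teardown: reload everything except the tagged rules
iptables-save | grep -v SPDK_NVMF | iptables-restore

After that, the nvme-tcp, nvme-fabrics, and nvme-keyring modules are unloaded and the target's network namespace is torn down, which is why the connect_stress test starting here has to rebuild the entire environment from scratch.
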
00:14:08.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.265 --rc genhtml_branch_coverage=1 00:14:08.265 --rc genhtml_function_coverage=1 00:14:08.265 --rc genhtml_legend=1 00:14:08.265 --rc geninfo_all_blocks=1 00:14:08.265 --rc geninfo_unexecuted_blocks=1 00:14:08.265 00:14:08.265 ' 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.265 --rc genhtml_branch_coverage=1 00:14:08.265 --rc genhtml_function_coverage=1 00:14:08.265 --rc genhtml_legend=1 00:14:08.265 --rc geninfo_all_blocks=1 00:14:08.265 --rc geninfo_unexecuted_blocks=1 00:14:08.265 00:14:08.265 ' 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.265 --rc genhtml_branch_coverage=1 00:14:08.265 --rc genhtml_function_coverage=1 00:14:08.265 --rc genhtml_legend=1 00:14:08.265 --rc geninfo_all_blocks=1 00:14:08.265 --rc geninfo_unexecuted_blocks=1 00:14:08.265 00:14:08.265 ' 00:14:08.265 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:08.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.265 --rc genhtml_branch_coverage=1 00:14:08.266 --rc genhtml_function_coverage=1 00:14:08.266 --rc genhtml_legend=1 00:14:08.266 --rc geninfo_all_blocks=1 00:14:08.266 --rc geninfo_unexecuted_blocks=1 00:14:08.266 00:14:08.266 ' 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:08.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:08.266 18:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:16.402 18:26:09 
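
The arrays being declared here (e810, x722, mlx) are about to be filled with known NIC PCI device IDs -- 0x1592 and 0x159b for Intel E810, 0x37d2 for X722, plus a list of Mellanox ConnectX IDs -- and since this run keeps only the E810 list (pci_devs=("${e810[@]}") in the trace), the scan that follows resolves just those ports to their kernel interfaces through sysfs. Stripped of its guards, the discovery loop is roughly:

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    net_devs+=("${pci_net_devs[@]}")
done

which is how 0000:4b:00.0 and 0000:4b:00.1 end up as cvl_0_0 (the future target side) and cvl_0_1 (the initiator side) in the namespace setup further down.
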
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:16.402 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:16.402 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.402 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:16.402 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:16.403 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:16.403 18:26:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:16.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:14:16.403 00:14:16.403 --- 10.0.0.2 ping statistics --- 00:14:16.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.403 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:16.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:14:16.403 00:14:16.403 --- 10.0.0.1 ping statistics --- 00:14:16.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.403 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2063593 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2063593 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2063593 ']' 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:16.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.403 18:26:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.403 [2024-12-06 18:26:10.416788] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:14:16.403 [2024-12-06 18:26:10.416856] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.403 [2024-12-06 18:26:10.517174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.403 [2024-12-06 18:26:10.569259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.403 [2024-12-06 18:26:10.569317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.403 [2024-12-06 18:26:10.569325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.403 [2024-12-06 18:26:10.569333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.403 [2024-12-06 18:26:10.569340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.403 [2024-12-06 18:26:10.571200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.403 [2024-12-06 18:26:10.571361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.403 [2024-12-06 18:26:10.571362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.663 [2024-12-06 18:26:11.300151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.663 [2024-12-06 18:26:11.321741] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.663 NULL1 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2063747 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.663 18:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.663 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:16.664 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:16.923 18:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:16.923 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.923 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.923 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.183 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.183 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:17.183 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.183 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.183 18:26:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.443 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.443 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:17.443 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.443 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.443 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.703 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.703 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:17.703 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.703 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.703 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.274 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.274 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:18.274 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.274 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.274 18:26:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.534 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.534 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:18.534 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.534 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.534 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.795 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.795 18:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:18.795 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.795 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.795 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.056 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.057 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:19.057 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.057 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.057 18:26:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.317 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.317 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:19.317 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.317 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.317 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.888 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.888 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:19.888 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.888 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.888 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.148 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.148 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:20.148 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.148 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.148 18:26:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.408 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.408 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:20.408 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.408 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.408 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.668 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.668 18:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:20.668 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.668 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.668 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.928 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.928 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:20.928 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.928 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.928 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.567 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.567 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:21.567 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.567 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.567 18:26:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.878 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:21.878 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.878 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.878 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.170 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.170 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:22.170 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.170 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.170 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.430 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.430 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:22.430 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.430 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.430 18:26:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.691 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.691 18:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:22.691 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.691 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.691 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.952 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.952 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:22.952 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.952 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.952 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.214 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.214 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:23.214 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.214 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.214 18:26:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.785 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.785 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:23.785 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.785 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.785 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.045 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.045 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:24.045 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.045 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.045 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.307 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.307 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:24.307 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.307 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.307 18:26:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.568 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.568 18:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:24.568 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.568 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.568 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.829 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.829 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:24.829 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.829 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.829 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.399 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.399 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:25.399 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.399 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.400 18:26:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.660 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:25.660 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.660 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.660 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.920 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.920 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:25.920 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.920 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.920 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.181 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.181 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:26.181 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.181 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.181 18:26:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.442 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.442 18:26:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:26.442 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:26.442 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.442 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:26.724 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2063747 00:14:26.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2063747) - No such process 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2063747 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:26.988 rmmod nvme_tcp 00:14:26.988 rmmod nvme_fabrics 00:14:26.988 rmmod nvme_keyring 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2063593 ']' 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2063593 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2063593 ']' 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2063593 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2063593 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
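Each block above is one iteration of the same watchdog: the harness probes with kill -0 that the connect_stress process (PID 2063747) is still alive and, while it is, replays a batch of RPCs at the target. A minimal sketch of that loop, assuming $rpcs is the rpc.txt batch file assembled by the seq 1 20 / cat steps earlier:

  while kill -0 "$PERF_PID" 2>/dev/null; do  # stressor still running?
    rpc_cmd < "$rpcs"                        # push the batched RPCs at the target
  done

The loop ends when the stressor's 10-second run expires, which is why the final kill -0 below reports 'No such process'.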
00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2063593' 00:14:26.988 killing process with pid 2063593 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2063593 00:14:26.988 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2063593 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:27.248 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:27.249 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.249 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.249 18:26:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.162 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:29.162 00:14:29.162 real 0m21.259s 00:14:29.162 user 0m42.012s 00:14:29.162 sys 0m9.457s 00:14:29.162 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.162 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:29.162 ************************************ 00:14:29.162 END TEST nvmf_connect_stress 00:14:29.162 ************************************ 00:14:29.162 18:26:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:29.162 18:26:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:29.162 18:26:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.162 18:26:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:29.162 ************************************ 00:14:29.162 START TEST nvmf_fused_ordering 00:14:29.162 ************************************ 00:14:29.162 18:26:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:29.425 * Looking for test storage... 
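Between the two tests, nvmftestfini tears the connect_stress fixture down in roughly the reverse order it was built; the fused_ordering run that starts here then rebuilds the same namespace, addressing, and firewall setup from scratch (as the trace below repeats). A condensed sketch of that teardown, assuming the cvl_0_* interface names and namespace from this run:

  modprobe -r nvme-tcp nvme-fabrics                     # unload initiator-side kernel modules
  killprocess "$nvmfpid"                                # stop the nvmf_tgt application
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the test's tagged ACCEPT rule
  _remove_spdk_ns                                       # delete the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1                              # clear the initiator address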
00:14:29.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.425 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:29.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.426 --rc genhtml_branch_coverage=1 00:14:29.426 --rc genhtml_function_coverage=1 00:14:29.426 --rc genhtml_legend=1 00:14:29.426 --rc geninfo_all_blocks=1 00:14:29.426 --rc geninfo_unexecuted_blocks=1 00:14:29.426 00:14:29.426 ' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:29.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.426 --rc genhtml_branch_coverage=1 00:14:29.426 --rc genhtml_function_coverage=1 00:14:29.426 --rc genhtml_legend=1 00:14:29.426 --rc geninfo_all_blocks=1 00:14:29.426 --rc geninfo_unexecuted_blocks=1 00:14:29.426 00:14:29.426 ' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:29.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.426 --rc genhtml_branch_coverage=1 00:14:29.426 --rc genhtml_function_coverage=1 00:14:29.426 --rc genhtml_legend=1 00:14:29.426 --rc geninfo_all_blocks=1 00:14:29.426 --rc geninfo_unexecuted_blocks=1 00:14:29.426 00:14:29.426 ' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:29.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.426 --rc genhtml_branch_coverage=1 00:14:29.426 --rc genhtml_function_coverage=1 00:14:29.426 --rc genhtml_legend=1 00:14:29.426 --rc geninfo_all_blocks=1 00:14:29.426 --rc geninfo_unexecuted_blocks=1 00:14:29.426 00:14:29.426 ' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:29.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:29.426 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:29.427 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:29.427 18:26:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:37.578 18:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:37.578 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:37.579 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:37.579 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:37.579 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:37.579 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:37.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:14:37.579 00:14:37.579 --- 10.0.0.2 ping statistics --- 00:14:37.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.579 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:37.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:14:37.579 00:14:37.579 --- 10.0.0.1 ping statistics --- 00:14:37.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.579 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.579 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2070089 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2070089 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2070089 ']' 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:37.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:37.580 18:26:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.580 [2024-12-06 18:26:31.791909] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:14:37.580 [2024-12-06 18:26:31.791979] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.580 [2024-12-06 18:26:31.892329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.580 [2024-12-06 18:26:31.942209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.580 [2024-12-06 18:26:31.942261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.580 [2024-12-06 18:26:31.942270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.580 [2024-12-06 18:26:31.942277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.580 [2024-12-06 18:26:31.942284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.580 [2024-12-06 18:26:31.943084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.841 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.841 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:37.841 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.841 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:37.841 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.102 [2024-12-06 18:26:32.650290] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.102 [2024-12-06 18:26:32.674533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.102 NULL1 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.102 18:26:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:38.102 [2024-12-06 18:26:32.745684] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
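To make the interface plumbing traced above easier to follow: nvmf/common.sh matched the two E810 ports (0x8086:0x159b) to their net devices under /sys/bus/pci/devices/*/net, then split them across a network namespace so one box can act as both target and initiator. A condensed sketch of those steps follows; the interface names (cvl_0_0, cvl_0_1), the 10.0.0.0/24 addressing, and the namespace name are specific to this run, and the sketch paraphrases the traced commands rather than quoting the script.

# Sketch (bash): target port moves into its own netns; initiator port stays in the root netns.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface (the harness tags the
# rule with an SPDK_NVMF comment so teardown can strip it again).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Cross-namespace reachability checks, as in the trace:
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1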
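The target bring-up in the trace reduces to one app start plus six RPCs. The sketch below reproduces it with plain scripts/rpc.py calls (the harness's rpc_cmd wrapper is assumed equivalent); the flags, NQNs, and the 1000 MB / 512-byte-block null bdev, which explains the "size: 1GB" line in the output below, are taken verbatim from the trace, while the relative paths assume an SPDK checkout rather than this job's workspace.

# Sketch (bash): start the target inside the namespace created above, then shape it over RPC.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# (the harness waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs)
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB backing device, 512-byte blocks
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Drive the fused-ordering exerciser at the listener from the root namespace:
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'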
00:14:38.102 [2024-12-06 18:26:32.745766] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070135 ]
00:14:38.675 Attached to nqn.2016-06.io.spdk:cnode1
00:14:38.675 Namespace ID: 1 size: 1GB
00:14:38.675 fused_ordering(0)
[fused_ordering(1) … fused_ordering(1022): 1,022 further consecutive entries, logged between 00:14:38.675 and 00:14:40.349, condensed]
00:14:40.349 fused_ordering(1023)
00:14:40.349 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:14:40.349 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:14:40.349 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:40.349 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:14:40.349 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:40.349 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:14:40.349 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:40.349 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:40.349 rmmod nvme_tcp
00:14:40.349 rmmod nvme_fabrics
00:14:40.349 rmmod nvme_keyring
00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:14:40.608 18:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2070089 ']' 00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2070089 00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2070089 ']' 00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2070089 00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070089 00:14:40.608 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070089' 00:14:40.609 killing process with pid 2070089 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2070089 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2070089 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.609 18:26:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:43.151 00:14:43.151 real 0m13.458s 00:14:43.151 user 0m7.029s 00:14:43.151 sys 0m7.220s 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:43.151 ************************************ 00:14:43.151 END TEST nvmf_fused_ordering 00:14:43.151 
************************************ 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.151 ************************************ 00:14:43.151 START TEST nvmf_ns_masking 00:14:43.151 ************************************ 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:43.151 * Looking for test storage... 00:14:43.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:43.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.151 --rc genhtml_branch_coverage=1 00:14:43.151 --rc genhtml_function_coverage=1 00:14:43.151 --rc genhtml_legend=1 00:14:43.151 --rc geninfo_all_blocks=1 00:14:43.151 --rc geninfo_unexecuted_blocks=1 00:14:43.151 00:14:43.151 ' 00:14:43.151 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:43.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.151 --rc genhtml_branch_coverage=1 00:14:43.152 --rc genhtml_function_coverage=1 00:14:43.152 --rc genhtml_legend=1 00:14:43.152 --rc geninfo_all_blocks=1 00:14:43.152 --rc geninfo_unexecuted_blocks=1 00:14:43.152 00:14:43.152 ' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:43.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.152 --rc genhtml_branch_coverage=1 00:14:43.152 --rc genhtml_function_coverage=1 00:14:43.152 --rc genhtml_legend=1 00:14:43.152 --rc geninfo_all_blocks=1 00:14:43.152 --rc geninfo_unexecuted_blocks=1 00:14:43.152 00:14:43.152 ' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:43.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.152 --rc genhtml_branch_coverage=1 00:14:43.152 --rc genhtml_function_coverage=1 00:14:43.152 --rc genhtml_legend=1 00:14:43.152 --rc geninfo_all_blocks=1 00:14:43.152 --rc geninfo_unexecuted_blocks=1 00:14:43.152 00:14:43.152 ' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
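An aside on the lt/cmp_versions trace just above: in paraphrase, scripts/common.sh splits both version strings on ".", "-", and ":" and compares them field by numeric field. A hedged re-implementation, illustrative only and not the script's exact code:

# Sketch (bash) of the version test traced above: is $1 strictly older than $2?
version_lt() {   # illustrative name; the harness reaches this via lt()/cmp_versions
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # newer
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the branch taken in this run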
nvmf/common.sh@7 -- # uname -s 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
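
The very long PATH above is paths/export.sh prepending the Go, protoc, and golangci toolchain directories again on every source, so earlier copies accumulate behind the new ones. The script prepends unconditionally; a guard like this (illustrative, not what export.sh does) would keep the prepend idempotent:

    prepend_path() {
        # Prepend $1 to PATH only when it is not already present.
        case ":$PATH:" in
            *":$1:"*) ;;                 # already on PATH, nothing to do
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH
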
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f602792c-ebb9-4d60-ac53-19d704275564 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a17ea9a3-4e59-453b-baaf-705368a472a6 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7d911e93-2e84-41c0-ae24-96c1cf795563 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:43.152 18:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.307 18:26:44 
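
ns_masking.sh opens by fixing every identifier it will reuse: an RPC helper path, a host-side RPC socket, a loop count, two namespace UUIDs, the subsystem NQN, two host NQNs, and a host ID. Reproducing the same prologue by hand looks roughly like this (uuidgen output is fresh each run, so the values differ from the ones logged):

    rpc_py=./scripts/rpc.py            # rpc.py path assumed relative to an SPDK checkout
    hostsock=/var/tmp/host.sock
    loops=5
    ns1uuid=$(uuidgen)
    ns2uuid=$(uuidgen)
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=$(uuidgen)
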
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.307 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:51.308 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:51.308 18:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:51.308 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:51.308 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
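
Discovery above matches each PCI function against the Intel E810/X722 and Mellanox ID tables, then resolves the interfaces bound to it by globbing sysfs and stripping the path, which is where the cvl_0_0 name comes from. The sysfs walk reduces to this (standard kernel layout; the helper name is illustrative):

    # Print the network interfaces backed by a PCI function, e.g. 0000:4b:00.0.
    net_devs_for_pci() {
        local pci=$1 dev
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] || continue
            echo "${dev##*/}"          # keep only the ifname, as the trace does
        done
    }
    net_devs_for_pci 0000:4b:00.0      # prints cvl_0_0 on this rig
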
00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:51.308 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.308 18:26:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.308 18:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:51.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:14:51.308 00:14:51.308 --- 10.0.0.2 ping statistics --- 00:14:51.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.308 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:14:51.308 00:14:51.308 --- 10.0.0.1 ping statistics --- 00:14:51.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.308 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:14:51.308 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2074806 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2074806 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2074806 ']' 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.309 18:26:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.309 [2024-12-06 18:26:45.293588] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:14:51.309 [2024-12-06 18:26:45.293665] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.309 [2024-12-06 18:26:45.393264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.309 [2024-12-06 18:26:45.444086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.309 [2024-12-06 18:26:45.444136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.309 [2024-12-06 18:26:45.444145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.309 [2024-12-06 18:26:45.444152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.309 [2024-12-06 18:26:45.444159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
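
nvmf_tcp_init above splits target and initiator across one host: the first E810 port moves into a private network namespace as 10.0.0.2, the second stays in the root namespace as 10.0.0.1, TCP/4420 is opened, both directions are ping-checked, and nvmf_tgt starts inside the namespace. Condensed from the trace (interface names are whatever this rig enumerated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
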
00:14:51.309 [2024-12-06 18:26:45.444922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.569 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.569 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:51.569 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:51.569 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:51.569 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.569 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.569 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:51.569 [2024-12-06 18:26:46.316867] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.828 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:51.828 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:51.828 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:51.828 Malloc1 00:14:51.828 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:52.089 Malloc2 00:14:52.089 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:52.350 18:26:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:52.611 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.611 [2024-12-06 18:26:47.340264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.611 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:52.611 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7d911e93-2e84-41c0-ae24-96c1cf795563 -a 10.0.0.2 -s 4420 -i 4 00:14:52.871 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:52.871 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:52.871 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.871 18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:52.871 
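
With the reactor running, the test provisions the target entirely over JSON-RPC: a TCP transport, two 64 MiB malloc bdevs, an open-access subsystem, namespace 1, and a listener, after which the kernel initiator connects with an explicit host NQN and host ID. The same sequence, condensed (full rpc.py paths shortened to $rpc_py):

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1          # 64 MiB, 512 B blocks
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4
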
18:26:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:54.781 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:54.781 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:54.781 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.781 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:54.781 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.781 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:54.781 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:54.781 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.041 [ 0]:0x1 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b3e9711bffea4bbdbb69a41245288ea6 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b3e9711bffea4bbdbb69a41245288ea6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.041 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.301 [ 0]:0x1 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b3e9711bffea4bbdbb69a41245288ea6 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b3e9711bffea4bbdbb69a41245288ea6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.301 18:26:49 
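
ns_is_visible above decides visibility in two steps: the NSID has to show up in nvme list-ns, and the NGUID reported by nvme id-ns has to be nonzero, since a masked namespace is reported with an all-zero NGUID. A standalone version of the check (helper name from the script; /dev/nvme0 assumed to be the connected controller):

    ns_is_visible() {
        # $1 is the NSID to test, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -qw "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1 && echo "NSID 1 visible to this host"
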
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.301 [ 1]:0x2 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b0e7709d049244f3a3fc0b6a6f5044bf 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b0e7709d049244f3a3fc0b6a6f5044bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:55.301 18:26:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.301 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.561 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:55.821 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:55.821 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7d911e93-2e84-41c0-ae24-96c1cf795563 -a 10.0.0.2 -s 4420 -i 4 00:14:56.082 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:56.082 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:56.082 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.082 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:56.082 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:56.082 18:26:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:57.995 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:58.257 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.257 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.257 [ 0]:0x2 00:14:58.257 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.257 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.257 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
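
This is the heart of the masking test: namespace 1 was re-added with --no-auto-visible, so after reconnecting, the very same host now reads an all-zero NGUID for NSID 1 while NSID 2 remains visible. Per-host visibility is then flipped with two RPCs, which the next steps exercise (commands as in the run, paths shortened):

    # Add a namespace no host can see until explicitly allowed:
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # Grant, then later revoke, visibility for one host NQN:
    $rpc_py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
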
nguid=b0e7709d049244f3a3fc0b6a6f5044bf 00:14:58.257 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b0e7709d049244f3a3fc0b6a6f5044bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.257 18:26:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.257 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:58.257 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.257 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.257 [ 0]:0x1 00:14:58.257 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.257 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b3e9711bffea4bbdbb69a41245288ea6 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b3e9711bffea4bbdbb69a41245288ea6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.518 [ 1]:0x2 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b0e7709d049244f3a3fc0b6a6f5044bf 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b0e7709d049244f3a3fc0b6a6f5044bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.518 18:26:53 
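
The NOT wrapper seen around ns_is_visible runs its argument, captures the exit status, and succeeds only when the wrapped command failed; that is how the script asserts a masked namespace must stay invisible. A minimal sketch of the inversion (the real autotest_common.sh version also validates the argument and treats statuses above 128, i.e. signals, specially):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))      # succeed only on failure of the wrapped command
    }
    NOT ns_is_visible 0x1 && echo "NSID 1 correctly hidden"
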
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.518 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.779 [ 0]:0x2 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b0e7709d049244f3a3fc0b6a6f5044bf 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b0e7709d049244f3a3fc0b6a6f5044bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.779 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.040 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:59.040 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7d911e93-2e84-41c0-ae24-96c1cf795563 -a 10.0.0.2 -s 4420 -i 4 00:14:59.301 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:59.301 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:59.301 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.301 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:59.301 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:59.301 18:26:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.215 18:26:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:01.474 [ 0]:0x1 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b3e9711bffea4bbdbb69a41245288ea6 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b3e9711bffea4bbdbb69a41245288ea6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:01.474 [ 1]:0x2 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b0e7709d049244f3a3fc0b6a6f5044bf 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b0e7709d049244f3a3fc0b6a6f5044bf != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.474 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:01.734 [ 0]:0x2 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b0e7709d049244f3a3fc0b6a6f5044bf 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b0e7709d049244f3a3fc0b6a6f5044bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.734 18:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.734 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.735 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.735 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.735 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:01.735 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:01.995 [2024-12-06 18:26:56.657574] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:01.995 request: 00:15:01.995 { 00:15:01.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.995 "nsid": 2, 00:15:01.995 "host": "nqn.2016-06.io.spdk:host1", 00:15:01.995 "method": "nvmf_ns_remove_host", 00:15:01.995 "req_id": 1 00:15:01.995 } 00:15:01.995 Got JSON-RPC error response 00:15:01.995 response: 00:15:01.995 { 00:15:01.995 "code": -32602, 00:15:01.995 "message": "Invalid parameters" 00:15:01.995 } 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:01.995 18:26:56 
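
The error above is a deliberate negative case: namespace 2 was added auto-visible, so it carries no per-host visibility list to edit, and the target rejects the call with JSON-RPC error -32602. The failing call and the logged request/response pair, condensed:

    $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
    # request:  {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 2,
    #            "host": "nqn.2016-06.io.spdk:host1",
    #            "method": "nvmf_ns_remove_host", "req_id": 1}
    # response: {"code": -32602, "message": "Invalid parameters"}
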
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.995 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:01.996 [ 0]:0x2 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.996 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.256 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b0e7709d049244f3a3fc0b6a6f5044bf 00:15:02.256 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b0e7709d049244f3a3fc0b6a6f5044bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.256 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:02.256 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2077302 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2077302 /var/tmp/host.sock 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2077302 ']' 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:02.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.257 18:26:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.257 [2024-12-06 18:26:56.921346] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:15:02.257 [2024-12-06 18:26:56.921398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077302 ] 00:15:02.257 [2024-12-06 18:26:57.009009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.518 [2024-12-06 18:26:57.044561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.089 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.089 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:03.089 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.350 18:26:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:03.350 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f602792c-ebb9-4d60-ac53-19d704275564 00:15:03.350 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:03.350 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F602792CEBB94D60AC5319D704275564 -i 00:15:03.630 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a17ea9a3-4e59-453b-baaf-705368a472a6 00:15:03.630 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:03.630 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A17EA9A34E59453BBAAF705368A472A6 -i 00:15:03.891 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:03.891 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:04.152 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:04.152 18:26:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:04.412 nvme0n1 00:15:04.412 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:04.412 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:04.672 nvme1n2 00:15:04.673 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:04.673 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:04.673 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:04.673 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:04.673 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:04.933 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:04.933 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:04.933 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:04.933 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:05.194 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f602792c-ebb9-4d60-ac53-19d704275564 == \f\6\0\2\7\9\2\c\-\e\b\b\9\-\4\d\6\0\-\a\c\5\3\-\1\9\d\7\0\4\2\7\5\5\6\4 ]] 00:15:05.194 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:05.194 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:05.194 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:05.194 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
a17ea9a3-4e59-453b-baaf-705368a472a6 == \a\1\7\e\a\9\a\3\-\4\e\5\9\-\4\5\3\b\-\b\a\a\f\-\7\0\5\3\6\8\a\4\7\2\a\6 ]] 00:15:05.194 18:26:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.454 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f602792c-ebb9-4d60-ac53-19d704275564 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F602792CEBB94D60AC5319D704275564 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F602792CEBB94D60AC5319D704275564 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F602792CEBB94D60AC5319D704275564 00:15:05.715 [2024-12-06 18:27:00.435833] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:05.715 [2024-12-06 18:27:00.435862] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:05.715 [2024-12-06 18:27:00.435869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.715 request: 00:15:05.715 { 00:15:05.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.715 "namespace": { 00:15:05.715 "bdev_name": 
"invalid", 00:15:05.715 "nsid": 1, 00:15:05.715 "nguid": "F602792CEBB94D60AC5319D704275564", 00:15:05.715 "no_auto_visible": false, 00:15:05.715 "hide_metadata": false 00:15:05.715 }, 00:15:05.715 "method": "nvmf_subsystem_add_ns", 00:15:05.715 "req_id": 1 00:15:05.715 } 00:15:05.715 Got JSON-RPC error response 00:15:05.715 response: 00:15:05.715 { 00:15:05.715 "code": -32602, 00:15:05.715 "message": "Invalid parameters" 00:15:05.715 } 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f602792c-ebb9-4d60-ac53-19d704275564 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:05.715 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F602792CEBB94D60AC5319D704275564 -i 00:15:05.976 18:27:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:07.896 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:07.896 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:07.896 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2077302 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2077302 ']' 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2077302 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077302 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2077302' 00:15:08.157 killing process with pid 2077302 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2077302 00:15:08.157 18:27:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2077302 00:15:08.417 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:08.678 rmmod nvme_tcp 00:15:08.678 rmmod nvme_fabrics 00:15:08.678 rmmod nvme_keyring 00:15:08.678 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2074806 ']' 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2074806 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2074806 ']' 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2074806 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2074806 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2074806' 00:15:08.679 killing process with pid 2074806 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2074806 00:15:08.679 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2074806 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
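Two idioms from the masking exercise above are worth isolating. The harness's NOT/valid_exec_arg wrapper runs a command expecting a non-zero exit, which is how the rejected nvmf_ns_remove_host and nvmf_subsystem_add_ns calls are asserted. And the NGUID arguments passed with -g are just the test UUIDs with dashes stripped and the hex upper-cased, which ns_is_visible then reads back through nvme id-ns. A minimal standalone sketch of those two pieces, assuming a connected controller at /dev/nvme0 and jq available; the variable names are illustrative, not the suite's own helpers:

# Sketch: UUID -> NGUID as fed to "rpc.py nvmf_subsystem_add_ns ... -g" (assumed
# equivalent to the uuid2nguid + "tr -d -" pipeline seen in the trace).
uuid=f602792c-ebb9-4d60-ac53-19d704275564
nguid=$(echo "$uuid" | tr -d - | tr '[:lower:]' '[:upper:]')
echo "$nguid"    # F602792CEBB94D60AC5319D704275564

# Sketch: visibility probe in the style of ns_is_visible: the namespace must be
# listed, and its NGUID must not read back as all zeroes.
if nvme list-ns /dev/nvme0 | grep -q 0x1; then
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
fi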
00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.940 18:27:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.850 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:10.850 00:15:10.850 real 0m28.116s 00:15:10.850 user 0m32.086s 00:15:10.850 sys 0m8.127s 00:15:10.850 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.850 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:10.850 ************************************ 00:15:10.850 END TEST nvmf_ns_masking 00:15:10.850 ************************************ 00:15:11.111 18:27:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:11.111 18:27:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:11.111 18:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:11.111 18:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.111 18:27:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.111 ************************************ 00:15:11.111 START TEST nvmf_nvme_cli 00:15:11.111 ************************************ 00:15:11.111 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:11.111 * Looking for test storage... 
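The sequence that just closed the masking test is the standard nvmftestfini teardown every test in this log finishes with: kill the target process, unload the NVMe fabrics modules, restore an iptables state with the SPDK-tagged rules filtered out, and flush the namespaced interfaces. A condensed sketch of that shape, assuming root and this run's cvl_0_* names; $nvmfpid stands in for the recorded target pid, and the netns removal is the assumed effect of the remove_spdk_ns helper:

# Sketch of the nvmftestfini-style cleanup traced above (assumes root).
kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"        # stop the target if still alive
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring        # unload fabrics modules
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only SPDK-tagged rules
ip -4 addr flush cvl_0_1                                 # clear the initiator interface
ip netns delete cvl_0_0_ns_spdk 2>/dev/null              # remove the target namespace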
00:15:11.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:11.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.112 --rc genhtml_branch_coverage=1 00:15:11.112 --rc genhtml_function_coverage=1 00:15:11.112 --rc genhtml_legend=1 00:15:11.112 --rc geninfo_all_blocks=1 00:15:11.112 --rc geninfo_unexecuted_blocks=1 00:15:11.112 00:15:11.112 ' 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:11.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.112 --rc genhtml_branch_coverage=1 00:15:11.112 --rc genhtml_function_coverage=1 00:15:11.112 --rc genhtml_legend=1 00:15:11.112 --rc geninfo_all_blocks=1 00:15:11.112 --rc geninfo_unexecuted_blocks=1 00:15:11.112 00:15:11.112 ' 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:11.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.112 --rc genhtml_branch_coverage=1 00:15:11.112 --rc genhtml_function_coverage=1 00:15:11.112 --rc genhtml_legend=1 00:15:11.112 --rc geninfo_all_blocks=1 00:15:11.112 --rc geninfo_unexecuted_blocks=1 00:15:11.112 00:15:11.112 ' 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:11.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.112 --rc genhtml_branch_coverage=1 00:15:11.112 --rc genhtml_function_coverage=1 00:15:11.112 --rc genhtml_legend=1 00:15:11.112 --rc geninfo_all_blocks=1 00:15:11.112 --rc geninfo_unexecuted_blocks=1 00:15:11.112 00:15:11.112 ' 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
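The cmp_versions/lt trace just above is the harness's lcov version gate: both version strings are split on '.', '-' and ':' (the IFS=.-: reads) and compared field by field, with the shorter array padded, so 1.15 sorts before 2. Under the assumption that only the ordering matters, the same decision fits in a one-line sketch with GNU sort -V instead of the script's loop:

# Sketch: "is version $1 older than $2?" via component-wise version sort.
version_lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # mirrors the "lt 1.15 2" call traced above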
00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.112 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:11.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.373 18:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:11.373 18:27:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.628 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:19.629 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:19.629 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.629 
18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:19.629 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:19.629 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:19.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:15:19.629 00:15:19.629 --- 10.0.0.2 ping statistics --- 00:15:19.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.629 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:15:19.629 00:15:19.629 --- 10.0.0.1 ping statistics --- 00:15:19.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.629 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:15:19.629 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2083284 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2083284 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2083284 ']' 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.630 18:27:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.630 [2024-12-06 18:27:13.533435] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
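By this point the harness has assembled its usual two-port topology and proved it with the two pings above: the first E810 port (cvl_0_0) lives in the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables ACCEPT rule opens TCP port 4420 between them. The wiring, reduced to the commands seen in the nvmf_tcp_init trace (run as root, same interface names as this run):

# Sketch of the netns topology the pings above validate.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1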
00:15:19.630 [2024-12-06 18:27:13.533509] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.630 [2024-12-06 18:27:13.633082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.630 [2024-12-06 18:27:13.687377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.630 [2024-12-06 18:27:13.687433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.630 [2024-12-06 18:27:13.687441] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.630 [2024-12-06 18:27:13.687449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.630 [2024-12-06 18:27:13.687456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.630 [2024-12-06 18:27:13.689920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.630 [2024-12-06 18:27:13.690081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.630 [2024-12-06 18:27:13.690244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.630 [2024-12-06 18:27:13.690245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.630 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.630 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:19.630 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:19.630 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.630 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 [2024-12-06 18:27:14.411816] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 Malloc0 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
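With the target up, the rest of the fixture is configured entirely over JSON-RPC: a TCP transport (with the -o -u 8192 options from the trace), two 64 MiB/512 B malloc bdevs, a subsystem with serial SPDKISFASTANDAWESOME, both namespaces, and data plus discovery listeners on 10.0.0.2:4420. Collapsed out of the rpc_cmd traces around this point into a sketch; rpc.py abbreviates the full spdk/scripts/rpc.py path used in the log:

# Sketch: the nvme_cli target fixture as direct rpc.py calls.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420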
00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 Malloc1 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 [2024-12-06 18:27:14.524695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.920 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:20.181 00:15:20.181 Discovery Log Number of Records 2, Generation counter 2 00:15:20.181 =====Discovery Log Entry 0====== 00:15:20.181 trtype: tcp 00:15:20.181 adrfam: ipv4 00:15:20.181 subtype: current discovery subsystem 00:15:20.181 treq: not required 00:15:20.181 portid: 0 00:15:20.181 trsvcid: 4420 00:15:20.181 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:20.181 traddr: 10.0.0.2 00:15:20.181 eflags: explicit discovery connections, duplicate discovery information 00:15:20.181 sectype: none 00:15:20.181 =====Discovery Log Entry 1====== 00:15:20.181 trtype: tcp 00:15:20.181 adrfam: ipv4 00:15:20.181 subtype: nvme subsystem 00:15:20.181 treq: not required 00:15:20.181 portid: 0 00:15:20.181 trsvcid: 4420 00:15:20.181 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:20.181 traddr: 10.0.0.2 00:15:20.181 eflags: none 00:15:20.181 sectype: none 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:20.181 18:27:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:21.577 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:21.577 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:21.577 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:21.577 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:21.577 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:21.577 18:27:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:24.127 18:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:24.127 /dev/nvme0n2 ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:24.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.127 18:27:18 
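The host-side half of the nvme_cli test is just four nvme-cli invocations: discover the target, connect to the data subsystem, confirm both Malloc namespaces surfaced as block devices with the expected serial, then disconnect. A minimal stand-alone sketch of that flow, assuming the target set up above is still listening on 10.0.0.2:4420 and nvme-cli is installed (the harness also passes --hostid with the same UUID, dropped here for brevity):

    HOSTNQN=$(nvme gen-hostnqn)
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"
    sleep 2                                                  # give udev time to create the block nodes
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2, one per namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1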
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.127 rmmod nvme_tcp 00:15:24.127 rmmod nvme_fabrics 00:15:24.127 rmmod nvme_keyring 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2083284 ']' 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2083284 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2083284 ']' 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2083284 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2083284 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:24.127 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2083284' 00:15:24.127 killing process with pid 2083284 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2083284 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2083284 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.128 18:27:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:26.676 00:15:26.676 real 0m15.214s 00:15:26.676 user 0m22.673s 00:15:26.676 sys 0m6.448s 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.676 ************************************ 00:15:26.676 END TEST nvmf_nvme_cli 00:15:26.676 ************************************ 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.676 ************************************ 00:15:26.676 START TEST nvmf_vfio_user 00:15:26.676 ************************************ 00:15:26.676 18:27:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
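Teardown is symmetric with setup: the subsystem is deleted over RPC, the nvmf_tgt reactor process (pid 2083284 in this run) is killed and waited on, and the host-side kernel modules that nvme connect pulled in are unloaded, which is what the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above show. A hedged sketch of the same cleanup, assuming $nvmfpid was captured when nvmf_tgt was launched and a repo-relative rpc.py path:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"     # SIGTERM lets the target drain and exit cleanly
    modprobe -v -r nvme-tcp nvme-fabrics   # mirrors the rmmod output in the log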
--transport=tcp 00:15:26.676 * Looking for test storage... 00:15:26.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:26.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.676 --rc genhtml_branch_coverage=1 00:15:26.676 --rc genhtml_function_coverage=1 00:15:26.676 --rc genhtml_legend=1 00:15:26.676 --rc geninfo_all_blocks=1 00:15:26.676 --rc geninfo_unexecuted_blocks=1 00:15:26.676 00:15:26.676 ' 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:26.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.676 --rc genhtml_branch_coverage=1 00:15:26.676 --rc genhtml_function_coverage=1 00:15:26.676 --rc genhtml_legend=1 00:15:26.676 --rc geninfo_all_blocks=1 00:15:26.676 --rc geninfo_unexecuted_blocks=1 00:15:26.676 00:15:26.676 ' 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:26.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.676 --rc genhtml_branch_coverage=1 00:15:26.676 --rc genhtml_function_coverage=1 00:15:26.676 --rc genhtml_legend=1 00:15:26.676 --rc geninfo_all_blocks=1 00:15:26.676 --rc geninfo_unexecuted_blocks=1 00:15:26.676 00:15:26.676 ' 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:26.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.676 --rc genhtml_branch_coverage=1 00:15:26.676 --rc genhtml_function_coverage=1 00:15:26.676 --rc genhtml_legend=1 00:15:26.676 --rc geninfo_all_blocks=1 00:15:26.676 --rc geninfo_unexecuted_blocks=1 00:15:26.676 00:15:26.676 ' 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.676 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
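Two things in this stretch are worth decoding. First, the scripts/common.sh trace lines are the harness probing the installed lcov version: cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, and because lcov 1.15 < 2 the older --rc lcov_branch_coverage/lcov_function_coverage flags get exported into LCOV_OPTS. A minimal re-implementation of that lt() check, under the assumption that only numeric fields appear:

    lt() {  # usage: lt VER1 VER2 -> success when VER1 < VER2
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov: add branch/function coverage flags"

Second, the "[: : integer expression expected" message from nvmf/common.sh line 33 is bash complaining that an empty string was handed to a numeric test (the '[' '' -eq 1 ']' trace immediately above it); the guard simply falls through to the false branch, so the message is noisy but harmless here.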
00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2085079 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2085079' 00:15:26.677 Process pid: 2085079 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2085079 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2085079 ']' 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.677 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:26.677 [2024-12-06 18:27:21.281591] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:15:26.677 [2024-12-06 18:27:21.281668] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.677 [2024-12-06 18:27:21.335876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.677 [2024-12-06 18:27:21.368983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.677 [2024-12-06 18:27:21.369012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
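Everything from here on runs against a fresh nvmf_tgt instance: the harness starts it with all tracepoint groups enabled (-e 0xFFFF) on cores 0-3, prints the pid, and waitforlisten then polls the RPC socket until the app answers. A minimal sketch of that launch-and-wait handshake, assuming the default /var/tmp/spdk.sock RPC socket and repo-relative paths:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    echo "Process pid: $nvmfpid"
    # poll until the RPC server is up; rpc_get_methods is a cheap round-trip
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done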
00:15:26.677 [2024-12-06 18:27:21.369019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.677 [2024-12-06 18:27:21.369024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.677 [2024-12-06 18:27:21.369029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.677 [2024-12-06 18:27:21.370323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.677 [2024-12-06 18:27:21.370480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.677 [2024-12-06 18:27:21.370636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.677 [2024-12-06 18:27:21.370655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.938 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.938 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:26.938 18:27:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:27.878 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:27.878 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:27.878 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:28.139 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:28.139 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:28.139 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:28.139 Malloc1 00:15:28.139 18:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:28.399 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:28.661 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:28.661 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:28.661 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:28.661 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:28.921 Malloc2 00:15:28.921 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
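setup_nvmf_vfio_user then provisions one vfio-user endpoint per device: a VFIOUSER transport, a per-device socket directory, a 64 MiB / 512 B malloc bdev, a subsystem with serial SPDK<i>, the namespace, and a VFIOUSER listener whose address is the directory path rather than an IP. Collapsed into the loop that the seq 1 $NUM_DEVICES trace above is driving, and assuming the same /var/run/vfio-user layout:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done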
00:15:29.181 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:29.181 18:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:29.441 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:29.441 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:29.441 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:29.441 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:29.441 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:29.441 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:29.441 [2024-12-06 18:27:24.156570] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:15:29.441 [2024-12-06 18:27:24.156616] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085516 ] 00:15:29.441 [2024-12-06 18:27:24.194950] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:29.441 [2024-12-06 18:27:24.203918] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:29.441 [2024-12-06 18:27:24.203936] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f137e5bf000 00:15:29.441 [2024-12-06 18:27:24.204917] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.441 [2024-12-06 18:27:24.205920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.441 [2024-12-06 18:27:24.206926] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.441 [2024-12-06 18:27:24.207936] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:29.441 [2024-12-06 18:27:24.208935] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:29.441 [2024-12-06 18:27:24.209941] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.441 [2024-12-06 18:27:24.210943] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:29.441 [2024-12-06 18:27:24.211956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:29.441 [2024-12-06 18:27:24.212966] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:29.441 [2024-12-06 18:27:24.212973] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f137e5b4000 00:15:29.441 [2024-12-06 18:27:24.213885] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:29.441 [2024-12-06 18:27:24.223336] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:29.441 [2024-12-06 18:27:24.223355] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:29.702 [2024-12-06 18:27:24.228059] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:29.702 [2024-12-06 18:27:24.228092] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:29.702 [2024-12-06 18:27:24.228153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:29.702 [2024-12-06 18:27:24.228163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:29.702 [2024-12-06 18:27:24.228167] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:29.702 [2024-12-06 18:27:24.229058] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:29.702 [2024-12-06 18:27:24.229065] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:29.702 [2024-12-06 18:27:24.229071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:29.702 [2024-12-06 18:27:24.230067] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:29.702 [2024-12-06 18:27:24.230075] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:29.702 [2024-12-06 18:27:24.230080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:29.702 [2024-12-06 18:27:24.231068] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:29.702 [2024-12-06 18:27:24.231075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:29.702 [2024-12-06 18:27:24.232075] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
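The bar-mapping and register-read debug above comes from the identify example binary attaching to the first endpoint as a vfio-user client; the -L flags are what enable the nvme/nvme_vfio/vfio_pci debug log components seen throughout this section. Reproducing just that client step against device 1, with the same transport ID string the harness builds:

    build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci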
00:15:29.702 [2024-12-06 18:27:24.232081] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:29.702 [2024-12-06 18:27:24.232085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:29.702 [2024-12-06 18:27:24.232090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:29.702 [2024-12-06 18:27:24.232196] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:29.702 [2024-12-06 18:27:24.232200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:29.702 [2024-12-06 18:27:24.232205] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:29.702 [2024-12-06 18:27:24.233086] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:29.702 [2024-12-06 18:27:24.234087] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:29.702 [2024-12-06 18:27:24.235088] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:29.702 [2024-12-06 18:27:24.236090] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:29.702 [2024-12-06 18:27:24.236139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:29.702 [2024-12-06 18:27:24.237100] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:29.702 [2024-12-06 18:27:24.237106] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:29.702 [2024-12-06 18:27:24.237110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:29.702 [2024-12-06 18:27:24.237124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:29.702 [2024-12-06 18:27:24.237130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:29.702 [2024-12-06 18:27:24.237143] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.702 [2024-12-06 18:27:24.237146] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.702 [2024-12-06 18:27:24.237149] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.702 [2024-12-06 18:27:24.237161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:29.702 [2024-12-06 18:27:24.237201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:29.702 [2024-12-06 18:27:24.237208] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:29.702 [2024-12-06 18:27:24.237213] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:29.702 [2024-12-06 18:27:24.237216] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:29.702 [2024-12-06 18:27:24.237219] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:29.702 [2024-12-06 18:27:24.237223] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:29.702 [2024-12-06 18:27:24.237227] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:29.702 [2024-12-06 18:27:24.237230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:29.702 [2024-12-06 18:27:24.237236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:29.702 [2024-12-06 18:27:24.237243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:29.702 [2024-12-06 18:27:24.237256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:29.702 [2024-12-06 18:27:24.237263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.702 [2024-12-06 18:27:24.237269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.702 [2024-12-06 18:27:24.237275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.702 [2024-12-06 18:27:24.237281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:29.702 [2024-12-06 18:27:24.237284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:29.702 [2024-12-06 18:27:24.237291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:29.702 [2024-12-06 18:27:24.237297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:29.702 [2024-12-06 18:27:24.237307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237311] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:29.703 
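The admin-queue traffic in this init sequence is easiest to follow by its IDENTIFY CNS values: the cdw10:00000001 just above is Identify Controller, while the 00000002 (active namespace list), 00000000 (Identify Namespace) and 00000003 (namespace ID descriptor list) commands that follow walk the single attached namespace. A throwaway helper for tallying them from a saved copy of this console output (build.log is a hypothetical filename):

    # CNS 0x01=controller  0x02=active NS list  0x00=namespace  0x03=NS ID descriptors
    grep -o 'IDENTIFY (06) qid:0 cid:[0-9]* nsid:[0-9]* cdw10:[0-9a-f]*' build.log \
        | awk '{print $NF}' | sort | uniq -c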
[2024-12-06 18:27:24.237314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237394] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:29.703 [2024-12-06 18:27:24.237397] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:29.703 [2024-12-06 18:27:24.237400] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.703 [2024-12-06 18:27:24.237404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237423] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:29.703 [2024-12-06 18:27:24.237434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237444] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.703 [2024-12-06 18:27:24.237447] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.703 [2024-12-06 18:27:24.237450] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.703 [2024-12-06 18:27:24.237454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:29.703 [2024-12-06 18:27:24.237494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.703 [2024-12-06 18:27:24.237496] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.703 [2024-12-06 18:27:24.237501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237516] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237526] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237544] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:29.703 [2024-12-06 18:27:24.237547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:29.703 [2024-12-06 18:27:24.237551] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:29.703 [2024-12-06 18:27:24.237565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237584] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237602] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237641] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:29.703 [2024-12-06 18:27:24.237644] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:29.703 [2024-12-06 18:27:24.237647] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:29.703 [2024-12-06 18:27:24.237649] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:29.703 [2024-12-06 18:27:24.237652] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:29.703 [2024-12-06 18:27:24.237656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:29.703 [2024-12-06 18:27:24.237662] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:29.703 [2024-12-06 18:27:24.237666] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:29.703 [2024-12-06 18:27:24.237668] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.703 [2024-12-06 18:27:24.237672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237677] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:29.703 [2024-12-06 18:27:24.237680] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:29.703 [2024-12-06 18:27:24.237683] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.703 [2024-12-06 18:27:24.237688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237694] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:29.703 [2024-12-06 18:27:24.237697] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:29.703 [2024-12-06 18:27:24.237699] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:29.703 [2024-12-06 18:27:24.237704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:29.703 [2024-12-06 18:27:24.237709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:29.703 [2024-12-06 18:27:24.237730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:29.703 ===================================================== 00:15:29.703 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:29.703 ===================================================== 00:15:29.703 Controller Capabilities/Features 00:15:29.703 ================================ 00:15:29.703 Vendor ID: 4e58 00:15:29.703 Subsystem Vendor ID: 4e58 00:15:29.703 Serial Number: SPDK1 00:15:29.703 Model Number: SPDK bdev Controller 00:15:29.703 Firmware Version: 25.01 00:15:29.703 Recommended Arb Burst: 6 00:15:29.703 IEEE OUI Identifier: 8d 6b 50 00:15:29.703 Multi-path I/O 00:15:29.703 May have multiple subsystem ports: Yes 00:15:29.703 May have multiple controllers: Yes 00:15:29.703 Associated with SR-IOV VF: No 00:15:29.703 Max Data Transfer Size: 131072 00:15:29.703 Max Number of Namespaces: 32 00:15:29.703 Max Number of I/O Queues: 127 00:15:29.703 NVMe Specification Version (VS): 1.3 00:15:29.703 NVMe Specification Version (Identify): 1.3 00:15:29.703 Maximum Queue Entries: 256 00:15:29.703 Contiguous Queues Required: Yes 00:15:29.703 Arbitration Mechanisms Supported 00:15:29.703 Weighted Round Robin: Not Supported 00:15:29.703 Vendor Specific: Not Supported 00:15:29.703 Reset Timeout: 15000 ms 00:15:29.703 Doorbell Stride: 4 bytes 00:15:29.703 NVM Subsystem Reset: Not Supported 00:15:29.703 Command Sets Supported 00:15:29.703 NVM Command Set: Supported 00:15:29.703 Boot Partition: Not Supported 00:15:29.703 Memory Page Size Minimum: 4096 bytes 00:15:29.703 Memory Page Size Maximum: 4096 bytes 00:15:29.703 Persistent Memory Region: Not Supported 00:15:29.703 Optional Asynchronous Events Supported 00:15:29.703 Namespace Attribute Notices: Supported 00:15:29.704 Firmware Activation Notices: Not Supported 00:15:29.704 ANA Change Notices: Not Supported 00:15:29.704 PLE Aggregate Log Change Notices: Not Supported 00:15:29.704 LBA Status Info Alert Notices: Not Supported 00:15:29.704 EGE Aggregate Log Change Notices: Not Supported 00:15:29.704 Normal NVM Subsystem Shutdown event: Not Supported 00:15:29.704 Zone Descriptor Change Notices: Not Supported 00:15:29.704 Discovery Log Change Notices: Not Supported 00:15:29.704 Controller Attributes 00:15:29.704 128-bit Host Identifier: Supported 00:15:29.704 Non-Operational Permissive Mode: Not Supported 00:15:29.704 NVM Sets: Not Supported 00:15:29.704 Read Recovery Levels: Not Supported 00:15:29.704 Endurance Groups: Not Supported 00:15:29.704 Predictable Latency Mode: Not Supported 00:15:29.704 Traffic Based Keep ALive: Not Supported 00:15:29.704 Namespace Granularity: Not Supported 00:15:29.704 SQ Associations: Not Supported 00:15:29.704 UUID List: Not Supported 00:15:29.704 Multi-Domain Subsystem: Not Supported 00:15:29.704 Fixed Capacity Management: Not Supported 00:15:29.704 Variable Capacity Management: Not Supported 00:15:29.704 Delete Endurance Group: Not Supported 00:15:29.704 Delete NVM Set: Not Supported 00:15:29.704 Extended LBA Formats Supported: Not Supported 00:15:29.704 Flexible Data Placement Supported: Not Supported 00:15:29.704 00:15:29.704 Controller Memory Buffer Support 00:15:29.704 ================================ 00:15:29.704 
Supported: No 00:15:29.704 00:15:29.704 Persistent Memory Region Support 00:15:29.704 ================================ 00:15:29.704 Supported: No 00:15:29.704 00:15:29.704 Admin Command Set Attributes 00:15:29.704 ============================ 00:15:29.704 Security Send/Receive: Not Supported 00:15:29.704 Format NVM: Not Supported 00:15:29.704 Firmware Activate/Download: Not Supported 00:15:29.704 Namespace Management: Not Supported 00:15:29.704 Device Self-Test: Not Supported 00:15:29.704 Directives: Not Supported 00:15:29.704 NVMe-MI: Not Supported 00:15:29.704 Virtualization Management: Not Supported 00:15:29.704 Doorbell Buffer Config: Not Supported 00:15:29.704 Get LBA Status Capability: Not Supported 00:15:29.704 Command & Feature Lockdown Capability: Not Supported 00:15:29.704 Abort Command Limit: 4 00:15:29.704 Async Event Request Limit: 4 00:15:29.704 Number of Firmware Slots: N/A 00:15:29.704 Firmware Slot 1 Read-Only: N/A 00:15:29.704 Firmware Activation Without Reset: N/A 00:15:29.704 Multiple Update Detection Support: N/A 00:15:29.704 Firmware Update Granularity: No Information Provided 00:15:29.704 Per-Namespace SMART Log: No 00:15:29.704 Asymmetric Namespace Access Log Page: Not Supported 00:15:29.704 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:29.704 Command Effects Log Page: Supported 00:15:29.704 Get Log Page Extended Data: Supported 00:15:29.704 Telemetry Log Pages: Not Supported 00:15:29.704 Persistent Event Log Pages: Not Supported 00:15:29.704 Supported Log Pages Log Page: May Support 00:15:29.704 Commands Supported & Effects Log Page: Not Supported 00:15:29.704 Feature Identifiers & Effects Log Page:May Support 00:15:29.704 NVMe-MI Commands & Effects Log Page: May Support 00:15:29.704 Data Area 4 for Telemetry Log: Not Supported 00:15:29.704 Error Log Page Entries Supported: 128 00:15:29.704 Keep Alive: Supported 00:15:29.704 Keep Alive Granularity: 10000 ms 00:15:29.704 00:15:29.704 NVM Command Set Attributes 00:15:29.704 ========================== 00:15:29.704 Submission Queue Entry Size 00:15:29.704 Max: 64 00:15:29.704 Min: 64 00:15:29.704 Completion Queue Entry Size 00:15:29.704 Max: 16 00:15:29.704 Min: 16 00:15:29.704 Number of Namespaces: 32 00:15:29.704 Compare Command: Supported 00:15:29.704 Write Uncorrectable Command: Not Supported 00:15:29.704 Dataset Management Command: Supported 00:15:29.704 Write Zeroes Command: Supported 00:15:29.704 Set Features Save Field: Not Supported 00:15:29.704 Reservations: Not Supported 00:15:29.704 Timestamp: Not Supported 00:15:29.704 Copy: Supported 00:15:29.704 Volatile Write Cache: Present 00:15:29.704 Atomic Write Unit (Normal): 1 00:15:29.704 Atomic Write Unit (PFail): 1 00:15:29.704 Atomic Compare & Write Unit: 1 00:15:29.704 Fused Compare & Write: Supported 00:15:29.704 Scatter-Gather List 00:15:29.704 SGL Command Set: Supported (Dword aligned) 00:15:29.704 SGL Keyed: Not Supported 00:15:29.704 SGL Bit Bucket Descriptor: Not Supported 00:15:29.704 SGL Metadata Pointer: Not Supported 00:15:29.704 Oversized SGL: Not Supported 00:15:29.704 SGL Metadata Address: Not Supported 00:15:29.704 SGL Offset: Not Supported 00:15:29.704 Transport SGL Data Block: Not Supported 00:15:29.704 Replay Protected Memory Block: Not Supported 00:15:29.704 00:15:29.704 Firmware Slot Information 00:15:29.704 ========================= 00:15:29.704 Active slot: 1 00:15:29.704 Slot 1 Firmware Revision: 25.01 00:15:29.704 00:15:29.704 00:15:29.704 Commands Supported and Effects 00:15:29.704 ============================== 00:15:29.704 Admin 
Commands 00:15:29.704 -------------- 00:15:29.704 Get Log Page (02h): Supported 00:15:29.704 Identify (06h): Supported 00:15:29.704 Abort (08h): Supported 00:15:29.704 Set Features (09h): Supported 00:15:29.704 Get Features (0Ah): Supported 00:15:29.704 Asynchronous Event Request (0Ch): Supported 00:15:29.704 Keep Alive (18h): Supported 00:15:29.704 I/O Commands 00:15:29.704 ------------ 00:15:29.704 Flush (00h): Supported LBA-Change 00:15:29.704 Write (01h): Supported LBA-Change 00:15:29.704 Read (02h): Supported 00:15:29.704 Compare (05h): Supported 00:15:29.704 Write Zeroes (08h): Supported LBA-Change 00:15:29.704 Dataset Management (09h): Supported LBA-Change 00:15:29.704 Copy (19h): Supported LBA-Change 00:15:29.704 00:15:29.704 Error Log 00:15:29.704 ========= 00:15:29.704 00:15:29.704 Arbitration 00:15:29.704 =========== 00:15:29.704 Arbitration Burst: 1 00:15:29.704 00:15:29.704 Power Management 00:15:29.704 ================ 00:15:29.704 Number of Power States: 1 00:15:29.704 Current Power State: Power State #0 00:15:29.704 Power State #0: 00:15:29.704 Max Power: 0.00 W 00:15:29.704 Non-Operational State: Operational 00:15:29.704 Entry Latency: Not Reported 00:15:29.704 Exit Latency: Not Reported 00:15:29.704 Relative Read Throughput: 0 00:15:29.704 Relative Read Latency: 0 00:15:29.704 Relative Write Throughput: 0 00:15:29.704 Relative Write Latency: 0 00:15:29.704 Idle Power: Not Reported 00:15:29.704 Active Power: Not Reported 00:15:29.704 Non-Operational Permissive Mode: Not Supported 00:15:29.704 00:15:29.704 Health Information 00:15:29.704 ================== 00:15:29.704 Critical Warnings: 00:15:29.704 Available Spare Space: OK 00:15:29.704 Temperature: OK 00:15:29.704 Device Reliability: OK 00:15:29.704 Read Only: No 00:15:29.704 Volatile Memory Backup: OK 00:15:29.704 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:29.704 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:29.704 Available Spare: 0% 00:15:29.704 Available Sp[2024-12-06 18:27:24.237801] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:29.704 [2024-12-06 18:27:24.237809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:29.704 [2024-12-06 18:27:24.237829] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:29.704 [2024-12-06 18:27:24.237836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.704 [2024-12-06 18:27:24.237841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.704 [2024-12-06 18:27:24.237845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.704 [2024-12-06 18:27:24.237849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.704 [2024-12-06 18:27:24.241644] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:29.704 [2024-12-06 18:27:24.241653] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:29.704 [2024-12-06 18:27:24.242124] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:29.704 [2024-12-06 18:27:24.242163] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:29.704 [2024-12-06 18:27:24.242167] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:29.704 [2024-12-06 18:27:24.243128] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:29.704 [2024-12-06 18:27:24.243137] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:29.704 [2024-12-06 18:27:24.243187] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:29.704 [2024-12-06 18:27:24.244158] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:29.705 are Threshold: 0% 00:15:29.705 Life Percentage Used: 0% 00:15:29.705 Data Units Read: 0 00:15:29.705 Data Units Written: 0 00:15:29.705 Host Read Commands: 0 00:15:29.705 Host Write Commands: 0 00:15:29.705 Controller Busy Time: 0 minutes 00:15:29.705 Power Cycles: 0 00:15:29.705 Power On Hours: 0 hours 00:15:29.705 Unsafe Shutdowns: 0 00:15:29.705 Unrecoverable Media Errors: 0 00:15:29.705 Lifetime Error Log Entries: 0 00:15:29.705 Warning Temperature Time: 0 minutes 00:15:29.705 Critical Temperature Time: 0 minutes 00:15:29.705 00:15:29.705 Number of Queues 00:15:29.705 ================ 00:15:29.705 Number of I/O Submission Queues: 127 00:15:29.705 Number of I/O Completion Queues: 127 00:15:29.705 00:15:29.705 Active Namespaces 00:15:29.705 ================= 00:15:29.705 Namespace ID:1 00:15:29.705 Error Recovery Timeout: Unlimited 00:15:29.705 Command Set Identifier: NVM (00h) 00:15:29.705 Deallocate: Supported 00:15:29.705 Deallocated/Unwritten Error: Not Supported 00:15:29.705 Deallocated Read Value: Unknown 00:15:29.705 Deallocate in Write Zeroes: Not Supported 00:15:29.705 Deallocated Guard Field: 0xFFFF 00:15:29.705 Flush: Supported 00:15:29.705 Reservation: Supported 00:15:29.705 Namespace Sharing Capabilities: Multiple Controllers 00:15:29.705 Size (in LBAs): 131072 (0GiB) 00:15:29.705 Capacity (in LBAs): 131072 (0GiB) 00:15:29.705 Utilization (in LBAs): 131072 (0GiB) 00:15:29.705 NGUID: 9C2A0434D10A4AC691578D5D3465286D 00:15:29.705 UUID: 9c2a0434-d10a-4ac6-9157-8d5d3465286d 00:15:29.705 Thin Provisioning: Not Supported 00:15:29.705 Per-NS Atomic Units: Yes 00:15:29.705 Atomic Boundary Size (Normal): 0 00:15:29.705 Atomic Boundary Size (PFail): 0 00:15:29.705 Atomic Boundary Offset: 0 00:15:29.705 Maximum Single Source Range Length: 65535 00:15:29.705 Maximum Copy Length: 65535 00:15:29.705 Maximum Source Range Count: 1 00:15:29.705 NGUID/EUI64 Never Reused: No 00:15:29.705 Namespace Write Protected: No 00:15:29.705 Number of LBA Formats: 1 00:15:29.705 Current LBA Format: LBA Format #00 00:15:29.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:29.705 00:15:29.705 18:27:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
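The nvmf_vfio_user.sh@84 step above drives spdk_nvme_perf against the vfio-user controller: -q 128 sets the queue depth, -o 4096 the I/O size in bytes, -w read the workload, -t 5 the run time in seconds, and -c 0x2 pins the worker to core 1. For a closed-loop benchmark like this, average latency should roughly equal queue depth divided by IOPS (Little's Law). A quick sanity check against the numbers printed in the read run below (a standalone sketch, not part of the test suite; the constants are copied from the log output):

    # Little's Law check: mean latency ~= queue depth / IOPS.
    QUEUE_DEPTH = 128          # -q 128 from the spdk_nvme_perf invocation
    READ_IOPS = 39952.27       # IOPS reported by the read run below
    REPORTED_AVG_US = 3203.70  # average latency (us) from the same table

    expected_avg_us = QUEUE_DEPTH / READ_IOPS * 1_000_000
    print(f"expected ~{expected_avg_us:.2f} us vs reported {REPORTED_AVG_US} us")
    # expected ~3203.82 us vs reported 3203.70 us -- consistent

The write run that follows shows the same relation: 128 / 16059.14 IOPS is roughly 7.97 ms, matching its reported 7976.08 us average.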
00:15:29.705 [2024-12-06 18:27:24.429316] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:34.988 Initializing NVMe Controllers
00:15:34.988 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:34.988 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:15:34.988 Initialization complete. Launching workers.
00:15:34.988 ========================================================
00:15:34.988 Latency(us)
00:15:34.988 Device Information : IOPS MiB/s Average min max
00:15:34.988 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39952.27 156.06 3203.70 868.60 6775.75
00:15:34.988 ========================================================
00:15:34.988 Total : 39952.27 156.06 3203.70 868.60 6775.75
00:15:34.988
00:15:34.988 [2024-12-06 18:27:29.452362] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:34.988 18:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:15:34.988 [2024-12-06 18:27:29.642240] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:40.272 Initializing NVMe Controllers
00:15:40.272 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:40.272 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:15:40.272 Initialization complete. Launching workers.
00:15:40.272 ========================================================
00:15:40.272 Latency(us)
00:15:40.272 Device Information : IOPS MiB/s Average min max
00:15:40.272 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16059.14 62.73 7976.08 5985.90 9976.43
00:15:40.272 ========================================================
00:15:40.272 Total : 16059.14 62.73 7976.08 5985.90 9976.43
00:15:40.272
00:15:40.272 [2024-12-06 18:27:34.681343] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:40.272 18:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:15:40.272 [2024-12-06 18:27:34.883173] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:45.557 [2024-12-06 18:27:39.947842] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:45.557 Initializing NVMe Controllers
00:15:45.557 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:45.557 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:15:45.557 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:15:45.557 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:15:45.557 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:15:45.557 Initialization complete. Launching workers.
00:15:45.557 Starting thread on core 2
00:15:45.557 Starting thread on core 3
00:15:45.557 Starting thread on core 1
00:15:45.557 18:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:15:45.557 [2024-12-06 18:27:40.196812] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:48.855 [2024-12-06 18:27:43.254974] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:48.855 Initializing NVMe Controllers
00:15:48.855 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:48.855 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:48.855 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:15:48.855 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:15:48.855 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:15:48.855 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:15:48.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:15:48.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:15:48.855 Initialization complete. Launching workers.
00:15:48.855 Starting thread on core 1 with urgent priority queue
00:15:48.855 Starting thread on core 2 with urgent priority queue
00:15:48.855 Starting thread on core 3 with urgent priority queue
00:15:48.855 Starting thread on core 0 with urgent priority queue
00:15:48.855 SPDK bdev Controller (SPDK1 ) core 0: 9267.67 IO/s 10.79 secs/100000 ios
00:15:48.855 SPDK bdev Controller (SPDK1 ) core 1: 11658.67 IO/s 8.58 secs/100000 ios
00:15:48.855 SPDK bdev Controller (SPDK1 ) core 2: 9688.00 IO/s 10.32 secs/100000 ios
00:15:48.855 SPDK bdev Controller (SPDK1 ) core 3: 13106.67 IO/s 7.63 secs/100000 ios
00:15:48.855 ========================================================
00:15:48.855
00:15:48.855 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:15:48.855 [2024-12-06 18:27:43.502122] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:48.855 Initializing NVMe Controllers
00:15:48.855 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:48.855 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:48.855 Namespace ID: 1 size: 0GB
00:15:48.855 Initialization complete.
00:15:48.855 INFO: using host memory buffer for IO
00:15:48.855 Hello world!
00:15:48.855 [2024-12-06 18:27:43.538326] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:48.855 18:27:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:15:49.114 [2024-12-06 18:27:43.778031] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:50.053 Initializing NVMe Controllers
00:15:50.053 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:50.053 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:50.053 Initialization complete. Launching workers.
00:15:50.053 submit (in ns) avg, min, max = 4996.8, 2829.2, 3997435.0 00:15:50.053 complete (in ns) avg, min, max = 17872.9, 1636.7, 3997508.3 00:15:50.053 00:15:50.053 Submit histogram 00:15:50.053 ================ 00:15:50.053 Range in us Cumulative Count 00:15:50.053 2.827 - 2.840: 0.2983% ( 59) 00:15:50.053 2.840 - 2.853: 1.2944% ( 197) 00:15:50.053 2.853 - 2.867: 3.6507% ( 466) 00:15:50.053 2.867 - 2.880: 8.5453% ( 968) 00:15:50.053 2.880 - 2.893: 13.4095% ( 962) 00:15:50.053 2.893 - 2.907: 19.4165% ( 1188) 00:15:50.053 2.907 - 2.920: 25.8432% ( 1271) 00:15:50.053 2.920 - 2.933: 31.0158% ( 1023) 00:15:50.053 2.933 - 2.947: 36.2492% ( 1035) 00:15:50.053 2.947 - 2.960: 41.6140% ( 1061) 00:15:50.053 2.960 - 2.973: 47.1912% ( 1103) 00:15:50.053 2.973 - 2.987: 53.9162% ( 1330) 00:15:50.053 2.987 - 3.000: 62.7042% ( 1738) 00:15:50.053 3.000 - 3.013: 71.3961% ( 1719) 00:15:50.053 3.013 - 3.027: 79.5217% ( 1607) 00:15:50.053 3.027 - 3.040: 86.2568% ( 1332) 00:15:50.053 3.040 - 3.053: 91.7884% ( 1094) 00:15:50.053 3.053 - 3.067: 95.1762% ( 670) 00:15:50.053 3.067 - 3.080: 97.4162% ( 443) 00:15:50.053 3.080 - 3.093: 98.6398% ( 242) 00:15:50.053 3.093 - 3.107: 99.2112% ( 113) 00:15:50.053 3.107 - 3.120: 99.4489% ( 47) 00:15:50.053 3.120 - 3.133: 99.5399% ( 18) 00:15:50.053 3.133 - 3.147: 99.5904% ( 10) 00:15:50.053 3.147 - 3.160: 99.6107% ( 4) 00:15:50.053 3.160 - 3.173: 99.6157% ( 1) 00:15:50.053 3.173 - 3.187: 99.6208% ( 1) 00:15:50.053 3.187 - 3.200: 99.6258% ( 1) 00:15:50.053 3.200 - 3.213: 99.6309% ( 1) 00:15:50.053 3.267 - 3.280: 99.6410% ( 2) 00:15:50.053 3.333 - 3.347: 99.6461% ( 1) 00:15:50.053 3.440 - 3.467: 99.6511% ( 1) 00:15:50.053 3.467 - 3.493: 99.6562% ( 1) 00:15:50.053 3.520 - 3.547: 99.6612% ( 1) 00:15:50.053 3.547 - 3.573: 99.6663% ( 1) 00:15:50.053 3.653 - 3.680: 99.6764% ( 2) 00:15:50.053 3.760 - 3.787: 99.6814% ( 1) 00:15:50.053 3.867 - 3.893: 99.6865% ( 1) 00:15:50.053 4.133 - 4.160: 99.6916% ( 1) 00:15:50.053 4.213 - 4.240: 99.6966% ( 1) 00:15:50.053 4.507 - 4.533: 99.7017% ( 1) 00:15:50.053 4.587 - 4.613: 99.7067% ( 1) 00:15:50.053 4.613 - 4.640: 99.7118% ( 1) 00:15:50.053 4.640 - 4.667: 99.7168% ( 1) 00:15:50.053 4.693 - 4.720: 99.7219% ( 1) 00:15:50.053 4.747 - 4.773: 99.7270% ( 1) 00:15:50.053 4.907 - 4.933: 99.7371% ( 2) 00:15:50.053 4.933 - 4.960: 99.7421% ( 1) 00:15:50.053 4.960 - 4.987: 99.7472% ( 1) 00:15:50.053 4.987 - 5.013: 99.7522% ( 1) 00:15:50.053 5.013 - 5.040: 99.7674% ( 3) 00:15:50.053 5.040 - 5.067: 99.7826% ( 3) 00:15:50.053 5.067 - 5.093: 99.7876% ( 1) 00:15:50.053 5.120 - 5.147: 99.7927% ( 1) 00:15:50.053 5.147 - 5.173: 99.8079% ( 3) 00:15:50.053 5.227 - 5.253: 99.8180% ( 2) 00:15:50.053 5.333 - 5.360: 99.8281% ( 2) 00:15:50.053 5.360 - 5.387: 99.8331% ( 1) 00:15:50.053 5.440 - 5.467: 99.8382% ( 1) 00:15:50.053 5.467 - 5.493: 99.8433% ( 1) 00:15:50.053 5.520 - 5.547: 99.8483% ( 1) 00:15:50.053 5.547 - 5.573: 99.8534% ( 1) 00:15:50.053 5.573 - 5.600: 99.8584% ( 1) 00:15:50.053 5.653 - 5.680: 99.8635% ( 1) 00:15:50.053 5.680 - 5.707: 99.8736% ( 2) 00:15:50.053 5.707 - 5.733: 99.8786% ( 1) 00:15:50.053 5.733 - 5.760: 99.8837% ( 1) 00:15:50.053 5.840 - 5.867: 99.8989% ( 3) 00:15:50.053 6.053 - 6.080: 99.9039% ( 1) 00:15:50.053 6.213 - 6.240: 99.9140% ( 2) 00:15:50.053 6.347 - 6.373: 99.9191% ( 1) 00:15:50.053 6.400 - 6.427: 99.9242% ( 1) 00:15:50.053 6.480 - 6.507: 99.9292% ( 1) 00:15:50.053 6.507 - 6.533: 99.9343% ( 1) 00:15:50.053 6.773 - 6.800: 99.9393% ( 1) 00:15:50.053 7.360 - 7.413: 99.9444% ( 1) 00:15:50.053 9.120 - 9.173: 99.9494% ( 1) 
00:15:50.053 3986.773 - 4014.080: 100.0000% ( 10) 00:15:50.053 00:15:50.053 Complete histogram 00:15:50.053 ================== 00:15:50.053 Ra[2024-12-06 18:27:44.796777] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.053 nge in us Cumulative Count 00:15:50.053 1.633 - 1.640: 0.0910% ( 18) 00:15:50.053 1.640 - 1.647: 0.7939% ( 139) 00:15:50.053 1.647 - 1.653: 0.8596% ( 13) 00:15:50.053 1.653 - 1.660: 0.9809% ( 24) 00:15:50.053 1.660 - 1.667: 1.0821% ( 20) 00:15:50.053 1.667 - 1.673: 1.0972% ( 3) 00:15:50.053 1.673 - 1.680: 1.5270% ( 85) 00:15:50.053 1.680 - 1.687: 32.6794% ( 6161) 00:15:50.053 1.687 - 1.693: 47.8182% ( 2994) 00:15:50.053 1.693 - 1.700: 51.3728% ( 703) 00:15:50.053 1.700 - 1.707: 71.6489% ( 4010) 00:15:50.053 1.707 - 1.720: 81.2358% ( 1896) 00:15:50.054 1.720 - 1.733: 84.0724% ( 561) 00:15:50.054 1.733 - 1.747: 85.9180% ( 365) 00:15:50.054 1.747 - 1.760: 90.6305% ( 932) 00:15:50.054 1.760 - 1.773: 95.6010% ( 983) 00:15:50.054 1.773 - 1.787: 98.4022% ( 554) 00:15:50.054 1.787 - 1.800: 99.2264% ( 163) 00:15:50.054 1.800 - 1.813: 99.3730% ( 29) 00:15:50.054 1.813 - 1.827: 99.3983% ( 5) 00:15:50.054 1.827 - 1.840: 99.4033% ( 1) 00:15:50.054 1.840 - 1.853: 99.4084% ( 1) 00:15:50.054 1.867 - 1.880: 99.4135% ( 1) 00:15:50.054 2.027 - 2.040: 99.4185% ( 1) 00:15:50.054 2.093 - 2.107: 99.4236% ( 1) 00:15:50.054 2.213 - 2.227: 99.4286% ( 1) 00:15:50.054 3.147 - 3.160: 99.4337% ( 1) 00:15:50.054 3.200 - 3.213: 99.4387% ( 1) 00:15:50.054 3.253 - 3.267: 99.4489% ( 2) 00:15:50.054 3.360 - 3.373: 99.4539% ( 1) 00:15:50.054 3.467 - 3.493: 99.4590% ( 1) 00:15:50.054 3.627 - 3.653: 99.4640% ( 1) 00:15:50.054 3.653 - 3.680: 99.4691% ( 1) 00:15:50.054 3.707 - 3.733: 99.4792% ( 2) 00:15:50.054 3.733 - 3.760: 99.4842% ( 1) 00:15:50.054 3.760 - 3.787: 99.4893% ( 1) 00:15:50.054 3.840 - 3.867: 99.4944% ( 1) 00:15:50.054 4.000 - 4.027: 99.4994% ( 1) 00:15:50.054 4.107 - 4.133: 99.5045% ( 1) 00:15:50.054 4.133 - 4.160: 99.5146% ( 2) 00:15:50.054 4.187 - 4.213: 99.5196% ( 1) 00:15:50.054 4.293 - 4.320: 99.5247% ( 1) 00:15:50.054 4.400 - 4.427: 99.5348% ( 2) 00:15:50.054 4.507 - 4.533: 99.5399% ( 1) 00:15:50.054 4.533 - 4.560: 99.5449% ( 1) 00:15:50.054 4.693 - 4.720: 99.5500% ( 1) 00:15:50.054 5.013 - 5.040: 99.5550% ( 1) 00:15:50.054 5.413 - 5.440: 99.5601% ( 1) 00:15:50.054 5.573 - 5.600: 99.5652% ( 1) 00:15:50.054 6.880 - 6.933: 99.5702% ( 1) 00:15:50.054 8.747 - 8.800: 99.5753% ( 1) 00:15:50.054 9.333 - 9.387: 99.5803% ( 1) 00:15:50.054 9.653 - 9.707: 99.5854% ( 1) 00:15:50.054 31.787 - 32.000: 99.5904% ( 1) 00:15:50.054 132.267 - 133.120: 99.5955% ( 1) 00:15:50.054 3986.773 - 4014.080: 100.0000% ( 80) 00:15:50.054 00:15:50.054 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:50.054 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:50.054 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:50.054 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:50.054 18:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:50.315 [ 00:15:50.315 { 00:15:50.315 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:15:50.315 "subtype": "Discovery", 00:15:50.315 "listen_addresses": [], 00:15:50.315 "allow_any_host": true, 00:15:50.315 "hosts": [] 00:15:50.315 }, 00:15:50.315 { 00:15:50.315 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:50.315 "subtype": "NVMe", 00:15:50.315 "listen_addresses": [ 00:15:50.315 { 00:15:50.315 "trtype": "VFIOUSER", 00:15:50.315 "adrfam": "IPv4", 00:15:50.315 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:50.315 "trsvcid": "0" 00:15:50.315 } 00:15:50.315 ], 00:15:50.315 "allow_any_host": true, 00:15:50.315 "hosts": [], 00:15:50.315 "serial_number": "SPDK1", 00:15:50.315 "model_number": "SPDK bdev Controller", 00:15:50.315 "max_namespaces": 32, 00:15:50.315 "min_cntlid": 1, 00:15:50.315 "max_cntlid": 65519, 00:15:50.315 "namespaces": [ 00:15:50.315 { 00:15:50.315 "nsid": 1, 00:15:50.315 "bdev_name": "Malloc1", 00:15:50.315 "name": "Malloc1", 00:15:50.315 "nguid": "9C2A0434D10A4AC691578D5D3465286D", 00:15:50.315 "uuid": "9c2a0434-d10a-4ac6-9157-8d5d3465286d" 00:15:50.315 } 00:15:50.315 ] 00:15:50.315 }, 00:15:50.315 { 00:15:50.315 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:50.315 "subtype": "NVMe", 00:15:50.315 "listen_addresses": [ 00:15:50.315 { 00:15:50.315 "trtype": "VFIOUSER", 00:15:50.315 "adrfam": "IPv4", 00:15:50.315 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:50.315 "trsvcid": "0" 00:15:50.315 } 00:15:50.315 ], 00:15:50.315 "allow_any_host": true, 00:15:50.315 "hosts": [], 00:15:50.315 "serial_number": "SPDK2", 00:15:50.315 "model_number": "SPDK bdev Controller", 00:15:50.315 "max_namespaces": 32, 00:15:50.315 "min_cntlid": 1, 00:15:50.315 "max_cntlid": 65519, 00:15:50.315 "namespaces": [ 00:15:50.315 { 00:15:50.315 "nsid": 1, 00:15:50.315 "bdev_name": "Malloc2", 00:15:50.315 "name": "Malloc2", 00:15:50.315 "nguid": "B293A31C34404335AE000B8D457C4CAD", 00:15:50.315 "uuid": "b293a31c-3440-4335-ae00-0b8d457c4cad" 00:15:50.315 } 00:15:50.315 ] 00:15:50.315 } 00:15:50.315 ] 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2089623 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:50.315 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:50.575 [2024-12-06 18:27:45.182077] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:50.575 Malloc3 00:15:50.575 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:50.835 [2024-12-06 18:27:45.378415] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:50.835 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:50.835 Asynchronous Event Request test 00:15:50.835 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:50.835 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:50.835 Registering asynchronous event callbacks... 00:15:50.835 Starting namespace attribute notice tests for all controllers... 00:15:50.835 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:50.835 aer_cb - Changed Namespace 00:15:50.835 Cleaning up... 00:15:50.835 [ 00:15:50.835 { 00:15:50.835 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:50.835 "subtype": "Discovery", 00:15:50.835 "listen_addresses": [], 00:15:50.835 "allow_any_host": true, 00:15:50.835 "hosts": [] 00:15:50.835 }, 00:15:50.835 { 00:15:50.835 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:50.835 "subtype": "NVMe", 00:15:50.835 "listen_addresses": [ 00:15:50.835 { 00:15:50.835 "trtype": "VFIOUSER", 00:15:50.835 "adrfam": "IPv4", 00:15:50.835 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:50.835 "trsvcid": "0" 00:15:50.835 } 00:15:50.835 ], 00:15:50.835 "allow_any_host": true, 00:15:50.835 "hosts": [], 00:15:50.835 "serial_number": "SPDK1", 00:15:50.835 "model_number": "SPDK bdev Controller", 00:15:50.835 "max_namespaces": 32, 00:15:50.835 "min_cntlid": 1, 00:15:50.835 "max_cntlid": 65519, 00:15:50.835 "namespaces": [ 00:15:50.835 { 00:15:50.835 "nsid": 1, 00:15:50.835 "bdev_name": "Malloc1", 00:15:50.835 "name": "Malloc1", 00:15:50.835 "nguid": "9C2A0434D10A4AC691578D5D3465286D", 00:15:50.835 "uuid": "9c2a0434-d10a-4ac6-9157-8d5d3465286d" 00:15:50.835 }, 00:15:50.835 { 00:15:50.835 "nsid": 2, 00:15:50.835 "bdev_name": "Malloc3", 00:15:50.835 "name": "Malloc3", 00:15:50.835 "nguid": "C8153DA4C2C945CE9CB5108010EA1F5C", 00:15:50.835 "uuid": "c8153da4-c2c9-45ce-9cb5-108010ea1f5c" 00:15:50.835 } 00:15:50.835 ] 00:15:50.835 }, 00:15:50.835 { 00:15:50.835 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:50.835 "subtype": "NVMe", 00:15:50.835 "listen_addresses": [ 00:15:50.835 { 00:15:50.835 "trtype": "VFIOUSER", 00:15:50.835 "adrfam": "IPv4", 00:15:50.835 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:50.835 "trsvcid": "0" 00:15:50.835 } 00:15:50.835 ], 00:15:50.835 "allow_any_host": true, 00:15:50.835 "hosts": [], 00:15:50.835 "serial_number": "SPDK2", 00:15:50.835 "model_number": "SPDK bdev 
Controller", 00:15:50.835 "max_namespaces": 32, 00:15:50.835 "min_cntlid": 1, 00:15:50.835 "max_cntlid": 65519, 00:15:50.835 "namespaces": [ 00:15:50.835 { 00:15:50.835 "nsid": 1, 00:15:50.835 "bdev_name": "Malloc2", 00:15:50.835 "name": "Malloc2", 00:15:50.835 "nguid": "B293A31C34404335AE000B8D457C4CAD", 00:15:50.835 "uuid": "b293a31c-3440-4335-ae00-0b8d457c4cad" 00:15:50.835 } 00:15:50.835 ] 00:15:50.835 } 00:15:50.835 ] 00:15:50.835 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2089623 00:15:50.835 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:50.835 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:50.835 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:50.835 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:50.835 [2024-12-06 18:27:45.615305] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:15:50.835 [2024-12-06 18:27:45.615349] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089806 ] 00:15:51.096 [2024-12-06 18:27:45.652617] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:51.096 [2024-12-06 18:27:45.666847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:51.096 [2024-12-06 18:27:45.666866] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2a921df000 00:15:51.096 [2024-12-06 18:27:45.667848] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.096 [2024-12-06 18:27:45.668854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.096 [2024-12-06 18:27:45.669866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.096 [2024-12-06 18:27:45.670872] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:51.096 [2024-12-06 18:27:45.671880] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:51.096 [2024-12-06 18:27:45.672888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:51.096 [2024-12-06 18:27:45.673896] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:51.096 [2024-12-06 18:27:45.674899] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:51.096 [2024-12-06 18:27:45.675909] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:51.096 [2024-12-06 18:27:45.675917] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2a921d4000 00:15:51.096 [2024-12-06 18:27:45.676827] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:51.096 [2024-12-06 18:27:45.689913] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:51.096 [2024-12-06 18:27:45.689931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:51.096 [2024-12-06 18:27:45.691979] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:51.096 [2024-12-06 18:27:45.692009] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:51.096 [2024-12-06 18:27:45.692071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:51.096 [2024-12-06 18:27:45.692080] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:51.096 [2024-12-06 18:27:45.692083] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:51.096 [2024-12-06 18:27:45.693641] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:51.096 [2024-12-06 18:27:45.693649] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:51.096 [2024-12-06 18:27:45.693654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:51.096 [2024-12-06 18:27:45.693981] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:51.096 [2024-12-06 18:27:45.693988] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:51.096 [2024-12-06 18:27:45.693994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:51.096 [2024-12-06 18:27:45.694992] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:51.096 [2024-12-06 18:27:45.694999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:51.096 [2024-12-06 18:27:45.695998] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:51.096 [2024-12-06 18:27:45.696005] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
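The debug entries above and below trace the standard NVMe controller enable handshake, here carried over vfio-user register accesses: the driver reads VS (offset 0x8) and CAP (offset 0x0), observes CC.EN = 0 with CSTS.RDY = 0, and, as the next entries show, writes CC.EN = 1 and polls CSTS (offset 0x1c) until RDY reads 1. A minimal sketch of that state machine (the read_reg/write_reg accessors are hypothetical stand-ins for the transport's 32-bit register I/O; SPDK's real sequence also programs AQA/ASQ/ACQ and the queue entry sizes, visible in the surrounding entries):

    import time

    CC, CSTS = 0x14, 0x1C   # register offsets seen in the log entries
    CC_EN = 1 << 0          # CC.EN: enable bit
    CSTS_RDY = 1 << 0       # CSTS.RDY: controller ready bit

    def enable_controller(read_reg, write_reg, timeout_s=15.0):
        """Schematic NVMe enable handshake: wait for RDY = 0 while EN is
        clear, set EN, then wait for RDY = 1. The 15 s timeout mirrors
        the 15000 ms timeouts printed in the log."""
        deadline = time.monotonic() + timeout_s
        while read_reg(CSTS) & CSTS_RDY:        # CSTS.RDY must drop to 0
            if time.monotonic() > deadline:
                raise TimeoutError("CSTS.RDY never cleared")
        write_reg(CC, read_reg(CC) | CC_EN)     # set CC.EN = 1
        while not read_reg(CSTS) & CSTS_RDY:    # controller ready at RDY = 1
            if time.monotonic() > deadline:
                raise TimeoutError("controller never became ready")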
00:15:51.096 [2024-12-06 18:27:45.696008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:51.096 [2024-12-06 18:27:45.696013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:51.096 [2024-12-06 18:27:45.696121] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:51.096 [2024-12-06 18:27:45.696125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:51.096 [2024-12-06 18:27:45.696128] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:51.096 [2024-12-06 18:27:45.697006] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:51.096 [2024-12-06 18:27:45.698015] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:51.096 [2024-12-06 18:27:45.699023] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:51.096 [2024-12-06 18:27:45.700022] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:51.096 [2024-12-06 18:27:45.700055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:51.096 [2024-12-06 18:27:45.701027] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:51.096 [2024-12-06 18:27:45.701033] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:51.096 [2024-12-06 18:27:45.701037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:51.096 [2024-12-06 18:27:45.701051] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:51.096 [2024-12-06 18:27:45.701057] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:51.096 [2024-12-06 18:27:45.701067] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:51.096 [2024-12-06 18:27:45.701071] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:51.096 [2024-12-06 18:27:45.701074] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.097 [2024-12-06 18:27:45.701082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.711642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:51.097 
[2024-12-06 18:27:45.711651] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:51.097 [2024-12-06 18:27:45.711656] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:51.097 [2024-12-06 18:27:45.711659] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:51.097 [2024-12-06 18:27:45.711662] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:51.097 [2024-12-06 18:27:45.711666] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:51.097 [2024-12-06 18:27:45.711669] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:51.097 [2024-12-06 18:27:45.711673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.711680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.711688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.719643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.719653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.097 [2024-12-06 18:27:45.719659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.097 [2024-12-06 18:27:45.719665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.097 [2024-12-06 18:27:45.719671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:51.097 [2024-12-06 18:27:45.719674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.719681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.719687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.727642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.727649] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:51.097 [2024-12-06 18:27:45.727653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:51.097 [2024-12-06 18:27:45.727658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.727662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.727668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.735642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.735690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.735696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.735701] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:51.097 [2024-12-06 18:27:45.735704] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:51.097 [2024-12-06 18:27:45.735707] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.097 [2024-12-06 18:27:45.735712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.743641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.743649] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:51.097 [2024-12-06 18:27:45.743662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.743667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.743672] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:51.097 [2024-12-06 18:27:45.743676] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:51.097 [2024-12-06 18:27:45.743678] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.097 [2024-12-06 18:27:45.743682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.751641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.751652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.751657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.751662] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:51.097 [2024-12-06 18:27:45.751666] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:51.097 [2024-12-06 18:27:45.751668] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.097 [2024-12-06 18:27:45.751672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.759642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.759649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.759654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.759659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.759665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.759668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.759672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.759676] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:51.097 [2024-12-06 18:27:45.759679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:51.097 [2024-12-06 18:27:45.759683] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:51.097 [2024-12-06 18:27:45.759696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.767642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.767653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.775642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.775652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.783642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
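The GET LOG PAGE commands below also show how SPDK sizes the PRP fields for each admin payload: a page-aligned 8192-byte buffer spans two 4 KiB pages, so PRP1 carries the first page and PRP2 the second (the one-entry PRP list collapses to a direct pointer), while the 512- and 4096-byte buffers fit in one page and leave PRP2 zero. A simplified sketch of that rule under the same assumptions (page-aligned buffer, at most two pages; anything longer needs a real PRP list, which this toy function does not build):

    PAGE = 4096  # matches the 4096-byte memory page size in the controller dump

    def prps_for(virt_addr, length):
        """Return (prp1, prp2) for a page-aligned transfer of at most
        two pages -- the cases visible in the log entries below."""
        assert virt_addr % PAGE == 0, "sketch assumes page-aligned buffers"
        pages = -(-length // PAGE)  # ceil(length / PAGE)
        assert pages <= 2, "longer transfers need a PRP list"
        return virt_addr, (virt_addr + PAGE if pages == 2 else 0)

    # len 8192 at 0x2000002f6000 -> PRP2 = 0x2000002f7000, as logged
    print(hex(prps_for(0x2000002f6000, 8192)[1]))
    # len 512 at 0x2000002fc000 -> PRP2 = 0x0, as logged
    print(hex(prps_for(0x2000002fc000, 512)[1]))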
00:15:51.097 [2024-12-06 18:27:45.783651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.791641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:51.097 [2024-12-06 18:27:45.791653] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:51.097 [2024-12-06 18:27:45.791657] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:51.097 [2024-12-06 18:27:45.791659] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:51.097 [2024-12-06 18:27:45.791662] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:51.097 [2024-12-06 18:27:45.791664] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:51.097 [2024-12-06 18:27:45.791669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:51.097 [2024-12-06 18:27:45.791675] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:51.097 [2024-12-06 18:27:45.791678] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:51.097 [2024-12-06 18:27:45.791680] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.097 [2024-12-06 18:27:45.791684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.791689] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:51.097 [2024-12-06 18:27:45.791692] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:51.097 [2024-12-06 18:27:45.791695] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.097 [2024-12-06 18:27:45.791699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:51.097 [2024-12-06 18:27:45.791704] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:51.097 [2024-12-06 18:27:45.791707] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:51.097 [2024-12-06 18:27:45.791710] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:51.098 [2024-12-06 18:27:45.791714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:51.098 [2024-12-06 18:27:45.799642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:51.098 [2024-12-06 18:27:45.799653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:51.098 [2024-12-06 18:27:45.799660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:51.098 
[2024-12-06 18:27:45.799665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:51.098 ===================================================== 00:15:51.098 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:51.098 ===================================================== 00:15:51.098 Controller Capabilities/Features 00:15:51.098 ================================ 00:15:51.098 Vendor ID: 4e58 00:15:51.098 Subsystem Vendor ID: 4e58 00:15:51.098 Serial Number: SPDK2 00:15:51.098 Model Number: SPDK bdev Controller 00:15:51.098 Firmware Version: 25.01 00:15:51.098 Recommended Arb Burst: 6 00:15:51.098 IEEE OUI Identifier: 8d 6b 50 00:15:51.098 Multi-path I/O 00:15:51.098 May have multiple subsystem ports: Yes 00:15:51.098 May have multiple controllers: Yes 00:15:51.098 Associated with SR-IOV VF: No 00:15:51.098 Max Data Transfer Size: 131072 00:15:51.098 Max Number of Namespaces: 32 00:15:51.098 Max Number of I/O Queues: 127 00:15:51.098 NVMe Specification Version (VS): 1.3 00:15:51.098 NVMe Specification Version (Identify): 1.3 00:15:51.098 Maximum Queue Entries: 256 00:15:51.098 Contiguous Queues Required: Yes 00:15:51.098 Arbitration Mechanisms Supported 00:15:51.098 Weighted Round Robin: Not Supported 00:15:51.098 Vendor Specific: Not Supported 00:15:51.098 Reset Timeout: 15000 ms 00:15:51.098 Doorbell Stride: 4 bytes 00:15:51.098 NVM Subsystem Reset: Not Supported 00:15:51.098 Command Sets Supported 00:15:51.098 NVM Command Set: Supported 00:15:51.098 Boot Partition: Not Supported 00:15:51.098 Memory Page Size Minimum: 4096 bytes 00:15:51.098 Memory Page Size Maximum: 4096 bytes 00:15:51.098 Persistent Memory Region: Not Supported 00:15:51.098 Optional Asynchronous Events Supported 00:15:51.098 Namespace Attribute Notices: Supported 00:15:51.098 Firmware Activation Notices: Not Supported 00:15:51.098 ANA Change Notices: Not Supported 00:15:51.098 PLE Aggregate Log Change Notices: Not Supported 00:15:51.098 LBA Status Info Alert Notices: Not Supported 00:15:51.098 EGE Aggregate Log Change Notices: Not Supported 00:15:51.098 Normal NVM Subsystem Shutdown event: Not Supported 00:15:51.098 Zone Descriptor Change Notices: Not Supported 00:15:51.098 Discovery Log Change Notices: Not Supported 00:15:51.098 Controller Attributes 00:15:51.098 128-bit Host Identifier: Supported 00:15:51.098 Non-Operational Permissive Mode: Not Supported 00:15:51.098 NVM Sets: Not Supported 00:15:51.098 Read Recovery Levels: Not Supported 00:15:51.098 Endurance Groups: Not Supported 00:15:51.098 Predictable Latency Mode: Not Supported 00:15:51.098 Traffic Based Keep ALive: Not Supported 00:15:51.098 Namespace Granularity: Not Supported 00:15:51.098 SQ Associations: Not Supported 00:15:51.098 UUID List: Not Supported 00:15:51.098 Multi-Domain Subsystem: Not Supported 00:15:51.098 Fixed Capacity Management: Not Supported 00:15:51.098 Variable Capacity Management: Not Supported 00:15:51.098 Delete Endurance Group: Not Supported 00:15:51.098 Delete NVM Set: Not Supported 00:15:51.098 Extended LBA Formats Supported: Not Supported 00:15:51.098 Flexible Data Placement Supported: Not Supported 00:15:51.098 00:15:51.098 Controller Memory Buffer Support 00:15:51.098 ================================ 00:15:51.098 Supported: No 00:15:51.098 00:15:51.098 Persistent Memory Region Support 00:15:51.098 ================================ 00:15:51.098 Supported: No 00:15:51.098 00:15:51.098 Admin Command Set Attributes 
00:15:51.098 ============================ 00:15:51.098 Security Send/Receive: Not Supported 00:15:51.098 Format NVM: Not Supported 00:15:51.098 Firmware Activate/Download: Not Supported 00:15:51.098 Namespace Management: Not Supported 00:15:51.098 Device Self-Test: Not Supported 00:15:51.098 Directives: Not Supported 00:15:51.098 NVMe-MI: Not Supported 00:15:51.098 Virtualization Management: Not Supported 00:15:51.098 Doorbell Buffer Config: Not Supported 00:15:51.098 Get LBA Status Capability: Not Supported 00:15:51.098 Command & Feature Lockdown Capability: Not Supported 00:15:51.098 Abort Command Limit: 4 00:15:51.098 Async Event Request Limit: 4 00:15:51.098 Number of Firmware Slots: N/A 00:15:51.098 Firmware Slot 1 Read-Only: N/A 00:15:51.098 Firmware Activation Without Reset: N/A 00:15:51.098 Multiple Update Detection Support: N/A 00:15:51.098 Firmware Update Granularity: No Information Provided 00:15:51.098 Per-Namespace SMART Log: No 00:15:51.098 Asymmetric Namespace Access Log Page: Not Supported 00:15:51.098 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:51.098 Command Effects Log Page: Supported 00:15:51.098 Get Log Page Extended Data: Supported 00:15:51.098 Telemetry Log Pages: Not Supported 00:15:51.098 Persistent Event Log Pages: Not Supported 00:15:51.098 Supported Log Pages Log Page: May Support 00:15:51.098 Commands Supported & Effects Log Page: Not Supported 00:15:51.098 Feature Identifiers & Effects Log Page:May Support 00:15:51.098 NVMe-MI Commands & Effects Log Page: May Support 00:15:51.098 Data Area 4 for Telemetry Log: Not Supported 00:15:51.098 Error Log Page Entries Supported: 128 00:15:51.098 Keep Alive: Supported 00:15:51.098 Keep Alive Granularity: 10000 ms 00:15:51.098 00:15:51.098 NVM Command Set Attributes 00:15:51.098 ========================== 00:15:51.098 Submission Queue Entry Size 00:15:51.098 Max: 64 00:15:51.098 Min: 64 00:15:51.098 Completion Queue Entry Size 00:15:51.098 Max: 16 00:15:51.098 Min: 16 00:15:51.098 Number of Namespaces: 32 00:15:51.098 Compare Command: Supported 00:15:51.098 Write Uncorrectable Command: Not Supported 00:15:51.098 Dataset Management Command: Supported 00:15:51.098 Write Zeroes Command: Supported 00:15:51.098 Set Features Save Field: Not Supported 00:15:51.098 Reservations: Not Supported 00:15:51.098 Timestamp: Not Supported 00:15:51.098 Copy: Supported 00:15:51.098 Volatile Write Cache: Present 00:15:51.098 Atomic Write Unit (Normal): 1 00:15:51.098 Atomic Write Unit (PFail): 1 00:15:51.098 Atomic Compare & Write Unit: 1 00:15:51.098 Fused Compare & Write: Supported 00:15:51.098 Scatter-Gather List 00:15:51.098 SGL Command Set: Supported (Dword aligned) 00:15:51.098 SGL Keyed: Not Supported 00:15:51.098 SGL Bit Bucket Descriptor: Not Supported 00:15:51.098 SGL Metadata Pointer: Not Supported 00:15:51.098 Oversized SGL: Not Supported 00:15:51.098 SGL Metadata Address: Not Supported 00:15:51.098 SGL Offset: Not Supported 00:15:51.098 Transport SGL Data Block: Not Supported 00:15:51.098 Replay Protected Memory Block: Not Supported 00:15:51.098 00:15:51.098 Firmware Slot Information 00:15:51.098 ========================= 00:15:51.098 Active slot: 1 00:15:51.098 Slot 1 Firmware Revision: 25.01 00:15:51.098 00:15:51.098 00:15:51.098 Commands Supported and Effects 00:15:51.098 ============================== 00:15:51.098 Admin Commands 00:15:51.098 -------------- 00:15:51.098 Get Log Page (02h): Supported 00:15:51.098 Identify (06h): Supported 00:15:51.098 Abort (08h): Supported 00:15:51.098 Set Features (09h): Supported 
00:15:51.098 Get Features (0Ah): Supported 00:15:51.098 Asynchronous Event Request (0Ch): Supported 00:15:51.098 Keep Alive (18h): Supported 00:15:51.098 I/O Commands 00:15:51.098 ------------ 00:15:51.098 Flush (00h): Supported LBA-Change 00:15:51.098 Write (01h): Supported LBA-Change 00:15:51.098 Read (02h): Supported 00:15:51.098 Compare (05h): Supported 00:15:51.098 Write Zeroes (08h): Supported LBA-Change 00:15:51.098 Dataset Management (09h): Supported LBA-Change 00:15:51.098 Copy (19h): Supported LBA-Change 00:15:51.098 00:15:51.098 Error Log 00:15:51.098 ========= 00:15:51.098 00:15:51.098 Arbitration 00:15:51.098 =========== 00:15:51.098 Arbitration Burst: 1 00:15:51.098 00:15:51.098 Power Management 00:15:51.098 ================ 00:15:51.098 Number of Power States: 1 00:15:51.098 Current Power State: Power State #0 00:15:51.098 Power State #0: 00:15:51.098 Max Power: 0.00 W 00:15:51.098 Non-Operational State: Operational 00:15:51.098 Entry Latency: Not Reported 00:15:51.098 Exit Latency: Not Reported 00:15:51.098 Relative Read Throughput: 0 00:15:51.098 Relative Read Latency: 0 00:15:51.098 Relative Write Throughput: 0 00:15:51.099 Relative Write Latency: 0 00:15:51.099 Idle Power: Not Reported 00:15:51.099 Active Power: Not Reported 00:15:51.099 Non-Operational Permissive Mode: Not Supported 00:15:51.099 00:15:51.099 Health Information 00:15:51.099 ================== 00:15:51.099 Critical Warnings: 00:15:51.099 Available Spare Space: OK 00:15:51.099 Temperature: OK 00:15:51.099 Device Reliability: OK 00:15:51.099 Read Only: No 00:15:51.099 Volatile Memory Backup: OK 00:15:51.099 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:51.099 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:51.099 Available Spare: 0% 00:15:51.099 Available Sp[2024-12-06 18:27:45.799736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:51.099 [2024-12-06 18:27:45.807641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:51.099 [2024-12-06 18:27:45.807664] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:51.099 [2024-12-06 18:27:45.807671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.099 [2024-12-06 18:27:45.807676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.099 [2024-12-06 18:27:45.807680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.099 [2024-12-06 18:27:45.807685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:51.099 [2024-12-06 18:27:45.807719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:51.099 [2024-12-06 18:27:45.807727] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:51.099 [2024-12-06 18:27:45.808726] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:51.099 [2024-12-06 18:27:45.808761] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:51.099 [2024-12-06 18:27:45.808766] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:51.099 [2024-12-06 18:27:45.809737] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:51.099 [2024-12-06 18:27:45.809745] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:51.099 [2024-12-06 18:27:45.809789] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:51.099 [2024-12-06 18:27:45.810758] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:51.099 are Threshold: 0% 00:15:51.099 Life Percentage Used: 0% 00:15:51.099 Data Units Read: 0 00:15:51.099 Data Units Written: 0 00:15:51.099 Host Read Commands: 0 00:15:51.099 Host Write Commands: 0 00:15:51.099 Controller Busy Time: 0 minutes 00:15:51.099 Power Cycles: 0 00:15:51.099 Power On Hours: 0 hours 00:15:51.099 Unsafe Shutdowns: 0 00:15:51.099 Unrecoverable Media Errors: 0 00:15:51.099 Lifetime Error Log Entries: 0 00:15:51.099 Warning Temperature Time: 0 minutes 00:15:51.099 Critical Temperature Time: 0 minutes 00:15:51.099 00:15:51.099 Number of Queues 00:15:51.099 ================ 00:15:51.099 Number of I/O Submission Queues: 127 00:15:51.099 Number of I/O Completion Queues: 127 00:15:51.099 00:15:51.099 Active Namespaces 00:15:51.099 ================= 00:15:51.099 Namespace ID:1 00:15:51.099 Error Recovery Timeout: Unlimited 00:15:51.099 Command Set Identifier: NVM (00h) 00:15:51.099 Deallocate: Supported 00:15:51.099 Deallocated/Unwritten Error: Not Supported 00:15:51.099 Deallocated Read Value: Unknown 00:15:51.099 Deallocate in Write Zeroes: Not Supported 00:15:51.099 Deallocated Guard Field: 0xFFFF 00:15:51.099 Flush: Supported 00:15:51.099 Reservation: Supported 00:15:51.099 Namespace Sharing Capabilities: Multiple Controllers 00:15:51.099 Size (in LBAs): 131072 (0GiB) 00:15:51.099 Capacity (in LBAs): 131072 (0GiB) 00:15:51.099 Utilization (in LBAs): 131072 (0GiB) 00:15:51.099 NGUID: B293A31C34404335AE000B8D457C4CAD 00:15:51.099 UUID: b293a31c-3440-4335-ae00-0b8d457c4cad 00:15:51.099 Thin Provisioning: Not Supported 00:15:51.099 Per-NS Atomic Units: Yes 00:15:51.099 Atomic Boundary Size (Normal): 0 00:15:51.099 Atomic Boundary Size (PFail): 0 00:15:51.099 Atomic Boundary Offset: 0 00:15:51.099 Maximum Single Source Range Length: 65535 00:15:51.099 Maximum Copy Length: 65535 00:15:51.099 Maximum Source Range Count: 1 00:15:51.099 NGUID/EUI64 Never Reused: No 00:15:51.099 Namespace Write Protected: No 00:15:51.099 Number of LBA Formats: 1 00:15:51.099 Current LBA Format: LBA Format #00 00:15:51.099 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:51.099 00:15:51.099 18:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:51.358 [2024-12-06 18:27:46.000047] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:56.640 Initializing NVMe Controllers 00:15:56.640 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:56.640 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:56.640 Initialization complete. Launching workers. 00:15:56.640 ======================================================== 00:15:56.640 Latency(us) 00:15:56.640 Device Information : IOPS MiB/s Average min max 00:15:56.640 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40043.40 156.42 3198.91 864.39 9702.26 00:15:56.640 ======================================================== 00:15:56.640 Total : 40043.40 156.42 3198.91 864.39 9702.26 00:15:56.640 00:15:56.641 [2024-12-06 18:27:51.105838] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:56.641 18:27:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:56.641 [2024-12-06 18:27:51.300454] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:01.925 Initializing NVMe Controllers 00:16:01.925 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:01.925 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:01.925 Initialization complete. Launching workers. 00:16:01.925 ======================================================== 00:16:01.925 Latency(us) 00:16:01.925 Device Information : IOPS MiB/s Average min max 00:16:01.925 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39986.81 156.20 3200.74 858.42 8749.70 00:16:01.925 ======================================================== 00:16:01.925 Total : 39986.81 156.20 3200.74 858.42 8749.70 00:16:01.925 00:16:01.925 [2024-12-06 18:27:56.318692] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:01.925 18:27:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:01.925 [2024-12-06 18:27:56.522906] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:07.229 [2024-12-06 18:28:01.664726] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:07.229 Initializing NVMe Controllers 00:16:07.229 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:07.229 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:07.229 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:07.229 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:07.229 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:07.229 Initialization complete. Launching workers. 
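The two spdk_nvme_perf passes further up drive the same vfio-user controller with 4 KiB I/O at queue depth 128 on one core for 5 seconds, first reads (-w read), then writes (-w write). A minimal sketch of reproducing such a pass and pulling the headline numbers out of its summary row, assuming the build-tree paths used by this job and the "Total :" output format shown above (the awk field positions are an assumption tied to that format):

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

    # Re-run the 5-second read pass with the exact flags from the log, then
    # print IOPS and average latency (us) from the "Total :" summary row.
    "$PERF" -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 \
      | awk '/Total +:/ { print "IOPS=" $3 " avg_lat_us=" $5 }'

Swapping -w read for -w write reproduces the second pass.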
00:16:07.229 Starting thread on core 2 00:16:07.229 Starting thread on core 3 00:16:07.229 Starting thread on core 1 00:16:07.229 18:28:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:07.229 [2024-12-06 18:28:01.916067] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.525 [2024-12-06 18:28:04.969880] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.525 Initializing NVMe Controllers 00:16:10.525 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.525 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.525 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:10.525 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:10.525 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:10.525 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:10.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:10.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:10.525 Initialization complete. Launching workers. 00:16:10.525 Starting thread on core 1 with urgent priority queue 00:16:10.525 Starting thread on core 2 with urgent priority queue 00:16:10.525 Starting thread on core 3 with urgent priority queue 00:16:10.525 Starting thread on core 0 with urgent priority queue 00:16:10.525 SPDK bdev Controller (SPDK2 ) core 0: 7041.00 IO/s 14.20 secs/100000 ios 00:16:10.525 SPDK bdev Controller (SPDK2 ) core 1: 4969.33 IO/s 20.12 secs/100000 ios 00:16:10.525 SPDK bdev Controller (SPDK2 ) core 2: 4137.67 IO/s 24.17 secs/100000 ios 00:16:10.525 SPDK bdev Controller (SPDK2 ) core 3: 8201.00 IO/s 12.19 secs/100000 ios 00:16:10.525 ======================================================== 00:16:10.525 00:16:10.525 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:10.525 [2024-12-06 18:28:05.209414] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.525 Initializing NVMe Controllers 00:16:10.525 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.525 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.525 Namespace ID: 1 size: 0GB 00:16:10.525 Initialization complete. 00:16:10.525 INFO: using host memory buffer for IO 00:16:10.525 Hello world! 
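In the arbitration table above, the secs/100000 ios column is simply 100000 divided by the IO/s column, so each row can be sanity-checked directly; for the core 0 row:

    # 100000 ios at 7041.00 IO/s -> prints 14.20, matching the table
    awk 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / 7041.00 }'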
00:16:10.525 [2024-12-06 18:28:05.221496] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.525 18:28:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:10.784 [2024-12-06 18:28:05.459457] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.168 Initializing NVMe Controllers 00:16:12.168 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.168 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.168 Initialization complete. Launching workers. 00:16:12.168 submit (in ns) avg, min, max = 6577.4, 2820.0, 4000463.3 00:16:12.168 complete (in ns) avg, min, max = 17156.3, 1644.2, 4002791.7 00:16:12.168 00:16:12.168 Submit histogram 00:16:12.168 ================ 00:16:12.168 Range in us Cumulative Count 00:16:12.168 2.813 - 2.827: 0.0501% ( 10) 00:16:12.168 2.827 - 2.840: 0.9377% ( 177) 00:16:12.168 2.840 - 2.853: 3.1841% ( 448) 00:16:12.168 2.853 - 2.867: 7.1554% ( 792) 00:16:12.168 2.867 - 2.880: 11.5981% ( 886) 00:16:12.168 2.880 - 2.893: 16.6525% ( 1008) 00:16:12.168 2.893 - 2.907: 21.9476% ( 1056) 00:16:12.168 2.907 - 2.920: 27.0270% ( 1013) 00:16:12.168 2.920 - 2.933: 32.8887% ( 1169) 00:16:12.168 2.933 - 2.947: 37.7877% ( 977) 00:16:12.168 2.947 - 2.960: 43.0276% ( 1045) 00:16:12.168 2.960 - 2.973: 49.5362% ( 1298) 00:16:12.168 2.973 - 2.987: 57.5891% ( 1606) 00:16:12.168 2.987 - 3.000: 66.6600% ( 1809) 00:16:12.168 3.000 - 3.013: 75.6907% ( 1801) 00:16:12.168 3.013 - 3.027: 82.7759% ( 1413) 00:16:12.168 3.027 - 3.040: 88.4822% ( 1138) 00:16:12.168 3.040 - 3.053: 93.0903% ( 919) 00:16:12.168 3.053 - 3.067: 96.3897% ( 658) 00:16:12.168 3.067 - 3.080: 98.0494% ( 331) 00:16:12.168 3.080 - 3.093: 99.0122% ( 192) 00:16:12.168 3.093 - 3.107: 99.3732% ( 72) 00:16:12.168 3.107 - 3.120: 99.4685% ( 19) 00:16:12.168 3.120 - 3.133: 99.5086% ( 8) 00:16:12.168 3.133 - 3.147: 99.5387% ( 6) 00:16:12.168 3.147 - 3.160: 99.5437% ( 1) 00:16:12.168 3.173 - 3.187: 99.5487% ( 1) 00:16:12.168 3.200 - 3.213: 99.5587% ( 2) 00:16:12.168 3.227 - 3.240: 99.5638% ( 1) 00:16:12.168 3.240 - 3.253: 99.5688% ( 1) 00:16:12.168 3.280 - 3.293: 99.5738% ( 1) 00:16:12.168 3.440 - 3.467: 99.5788% ( 1) 00:16:12.168 3.493 - 3.520: 99.5838% ( 1) 00:16:12.168 3.600 - 3.627: 99.5888% ( 1) 00:16:12.168 3.653 - 3.680: 99.5938% ( 1) 00:16:12.168 3.813 - 3.840: 99.5989% ( 1) 00:16:12.168 3.973 - 4.000: 99.6039% ( 1) 00:16:12.168 4.160 - 4.187: 99.6089% ( 1) 00:16:12.168 4.320 - 4.347: 99.6139% ( 1) 00:16:12.169 4.347 - 4.373: 99.6189% ( 1) 00:16:12.169 4.480 - 4.507: 99.6239% ( 1) 00:16:12.169 4.507 - 4.533: 99.6390% ( 3) 00:16:12.169 4.533 - 4.560: 99.6540% ( 3) 00:16:12.169 4.587 - 4.613: 99.6640% ( 2) 00:16:12.169 4.613 - 4.640: 99.6691% ( 1) 00:16:12.169 4.640 - 4.667: 99.6741% ( 1) 00:16:12.169 4.667 - 4.693: 99.6791% ( 1) 00:16:12.169 4.720 - 4.747: 99.6841% ( 1) 00:16:12.169 4.747 - 4.773: 99.6941% ( 2) 00:16:12.169 4.773 - 4.800: 99.7092% ( 3) 00:16:12.169 4.853 - 4.880: 99.7142% ( 1) 00:16:12.169 4.880 - 4.907: 99.7192% ( 1) 00:16:12.169 4.907 - 4.933: 99.7292% ( 2) 00:16:12.169 4.933 - 4.960: 99.7342% ( 1) 00:16:12.169 4.960 - 4.987: 99.7393% ( 1) 00:16:12.169 4.987 - 5.013: 99.7493% ( 2) 00:16:12.169 5.040 - 5.067: 99.7543% ( 1) 00:16:12.169 5.173 - 5.200: 
99.7593% ( 1) 00:16:12.169 5.200 - 5.227: 99.7693% ( 2) 00:16:12.169 5.253 - 5.280: 99.7744% ( 1) 00:16:12.169 5.307 - 5.333: 99.7794% ( 1) 00:16:12.169 5.333 - 5.360: 99.7844% ( 1) 00:16:12.169 5.360 - 5.387: 99.7994% ( 3) 00:16:12.169 5.440 - 5.467: 99.8044% ( 1) 00:16:12.169 5.493 - 5.520: 99.8095% ( 1) 00:16:12.169 5.547 - 5.573: 99.8145% ( 1) 00:16:12.169 5.573 - 5.600: 99.8245% ( 2) 00:16:12.169 5.600 - 5.627: 99.8345% ( 2) 00:16:12.169 5.707 - 5.733: 99.8395% ( 1) 00:16:12.169 5.840 - 5.867: 99.8446% ( 1) 00:16:12.169 5.893 - 5.920: 99.8496% ( 1) 00:16:12.169 5.973 - 6.000: 99.8546% ( 1) 00:16:12.169 6.107 - 6.133: 99.8596% ( 1) 00:16:12.169 6.160 - 6.187: 99.8646% ( 1) 00:16:12.169 6.240 - 6.267: 99.8746% ( 2) 00:16:12.169 6.293 - 6.320: 99.8797% ( 1) 00:16:12.169 6.480 - 6.507: 99.8847% ( 1) 00:16:12.169 6.613 - 6.640: 99.8897% ( 1) 00:16:12.169 6.640 - 6.667: 99.8947% ( 1) 00:16:12.169 [2024-12-06 18:28:06.551140] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.169 6.693 - 6.720: 99.8997% ( 1) 00:16:12.169 6.880 - 6.933: 99.9047% ( 1) 00:16:12.169 8.747 - 8.800: 99.9097% ( 1) 00:16:12.169 3986.773 - 4014.080: 100.0000% ( 18) 00:16:12.169 00:16:12.169 Complete histogram 00:16:12.169 ================== 00:16:12.169 Range in us Cumulative Count 00:16:12.169 1.640 - 1.647: 0.0050% ( 1) 00:16:12.169 1.647 - 1.653: 0.0100% ( 1) 00:16:12.169 1.653 - 1.660: 0.1705% ( 32) 00:16:12.169 1.660 - 1.667: 0.7521% ( 116) 00:16:12.169 1.667 - 1.673: 0.8173% ( 13) 00:16:12.169 1.673 - 1.680: 0.8875% ( 14) 00:16:12.169 1.680 - 1.687: 0.9627% ( 15) 00:16:12.169 1.687 - 1.693: 0.9728% ( 2) 00:16:12.169 1.693 - 1.700: 0.9828% ( 2) 00:16:12.169 1.700 - 1.707: 35.7469% ( 6933) 00:16:12.169 1.707 - 1.720: 50.4989% ( 2942) 00:16:12.169 1.720 - 1.733: 77.5761% ( 5400) 00:16:12.169 1.733 - 1.747: 84.0345% ( 1288) 00:16:12.169 1.747 - 1.760: 84.9772% ( 188) 00:16:12.169 1.760 - 1.773: 88.1713% ( 637) 00:16:12.169 1.773 - 1.787: 93.2357% ( 1010) 00:16:12.169 1.787 - 1.800: 97.0165% ( 754) 00:16:12.169 1.800 - 1.813: 98.8969% ( 375) 00:16:12.169 1.813 - 1.827: 99.3481% ( 90) 00:16:12.169 1.827 - 1.840: 99.4434% ( 19) 00:16:12.169 1.840 - 1.853: 99.4484% ( 1) 00:16:12.169 2.040 - 2.053: 99.4585% ( 2) 00:16:12.169 3.253 - 3.267: 99.4735% ( 3) 00:16:12.169 3.440 - 3.467: 99.4785% ( 1) 00:16:12.169 3.467 - 3.493: 99.4885% ( 2) 00:16:12.169 3.493 - 3.520: 99.4936% ( 1) 00:16:12.169 3.520 - 3.547: 99.4986% ( 1) 00:16:12.169 3.600 - 3.627: 99.5036% ( 1) 00:16:12.169 3.627 - 3.653: 99.5086% ( 1) 00:16:12.169 3.653 - 3.680: 99.5136% ( 1) 00:16:12.169 3.760 - 3.787: 99.5236% ( 2) 00:16:12.169 3.920 - 3.947: 99.5287% ( 1) 00:16:12.169 3.973 - 4.000: 99.5337% ( 1) 00:16:12.169 4.027 - 4.053: 99.5437% ( 2) 00:16:12.169 4.080 - 4.107: 99.5487% ( 1) 00:16:12.169 4.160 - 4.187: 99.5537% ( 1) 00:16:12.169 4.293 - 4.320: 99.5587% ( 1) 00:16:12.169 4.347 - 4.373: 99.5638% ( 1) 00:16:12.169 4.400 - 4.427: 99.5688% ( 1) 00:16:12.169 4.427 - 4.453: 99.5738% ( 1) 00:16:12.169 4.453 - 4.480: 99.5788% ( 1) 00:16:12.169 4.533 - 4.560: 99.5838% ( 1) 00:16:12.169 4.827 - 4.853: 99.5888% ( 1) 00:16:12.169 4.880 - 4.907: 99.5938% ( 1) 00:16:12.169 5.013 - 5.040: 99.5989% ( 1) 00:16:12.169 5.120 - 5.147: 99.6039% ( 1) 00:16:12.169 5.147 - 5.173: 99.6089% ( 1) 00:16:12.169 116.053 - 116.907: 99.6139% ( 1) 00:16:12.169 3986.773 - 4014.080: 100.0000% ( 77) 00:16:12.169 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:12.169 [ 00:16:12.169 { 00:16:12.169 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:12.169 "subtype": "Discovery", 00:16:12.169 "listen_addresses": [], 00:16:12.169 "allow_any_host": true, 00:16:12.169 "hosts": [] 00:16:12.169 }, 00:16:12.169 { 00:16:12.169 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:12.169 "subtype": "NVMe", 00:16:12.169 "listen_addresses": [ 00:16:12.169 { 00:16:12.169 "trtype": "VFIOUSER", 00:16:12.169 "adrfam": "IPv4", 00:16:12.169 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:12.169 "trsvcid": "0" 00:16:12.169 } 00:16:12.169 ], 00:16:12.169 "allow_any_host": true, 00:16:12.169 "hosts": [], 00:16:12.169 "serial_number": "SPDK1", 00:16:12.169 "model_number": "SPDK bdev Controller", 00:16:12.169 "max_namespaces": 32, 00:16:12.169 "min_cntlid": 1, 00:16:12.169 "max_cntlid": 65519, 00:16:12.169 "namespaces": [ 00:16:12.169 { 00:16:12.169 "nsid": 1, 00:16:12.169 "bdev_name": "Malloc1", 00:16:12.169 "name": "Malloc1", 00:16:12.169 "nguid": "9C2A0434D10A4AC691578D5D3465286D", 00:16:12.169 "uuid": "9c2a0434-d10a-4ac6-9157-8d5d3465286d" 00:16:12.169 }, 00:16:12.169 { 00:16:12.169 "nsid": 2, 00:16:12.169 "bdev_name": "Malloc3", 00:16:12.169 "name": "Malloc3", 00:16:12.169 "nguid": "C8153DA4C2C945CE9CB5108010EA1F5C", 00:16:12.169 "uuid": "c8153da4-c2c9-45ce-9cb5-108010ea1f5c" 00:16:12.169 } 00:16:12.169 ] 00:16:12.169 }, 00:16:12.169 { 00:16:12.169 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:12.169 "subtype": "NVMe", 00:16:12.169 "listen_addresses": [ 00:16:12.169 { 00:16:12.169 "trtype": "VFIOUSER", 00:16:12.169 "adrfam": "IPv4", 00:16:12.169 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:12.169 "trsvcid": "0" 00:16:12.169 } 00:16:12.169 ], 00:16:12.169 "allow_any_host": true, 00:16:12.169 "hosts": [], 00:16:12.169 "serial_number": "SPDK2", 00:16:12.169 "model_number": "SPDK bdev Controller", 00:16:12.169 "max_namespaces": 32, 00:16:12.169 "min_cntlid": 1, 00:16:12.169 "max_cntlid": 65519, 00:16:12.169 "namespaces": [ 00:16:12.169 { 00:16:12.169 "nsid": 1, 00:16:12.169 "bdev_name": "Malloc2", 00:16:12.169 "name": "Malloc2", 00:16:12.169 "nguid": "B293A31C34404335AE000B8D457C4CAD", 00:16:12.169 "uuid": "b293a31c-3440-4335-ae00-0b8d457c4cad" 00:16:12.169 } 00:16:12.169 ] 00:16:12.169 } 00:16:12.169 ] 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2093833 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:12.169 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:12.169 [2024-12-06 18:28:06.929997] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:12.169 Malloc4 00:16:12.431 18:28:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:12.431 [2024-12-06 18:28:07.123398] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:12.431 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:12.431 Asynchronous Event Request test 00:16:12.431 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.431 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:12.431 Registering asynchronous event callbacks... 00:16:12.431 Starting namespace attribute notice tests for all controllers... 00:16:12.431 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:12.431 aer_cb - Changed Namespace 00:16:12.431 Cleaning up... 
00:16:12.693 [ 00:16:12.693 { 00:16:12.693 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:12.693 "subtype": "Discovery", 00:16:12.693 "listen_addresses": [], 00:16:12.693 "allow_any_host": true, 00:16:12.693 "hosts": [] 00:16:12.693 }, 00:16:12.693 { 00:16:12.693 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:12.693 "subtype": "NVMe", 00:16:12.693 "listen_addresses": [ 00:16:12.693 { 00:16:12.693 "trtype": "VFIOUSER", 00:16:12.693 "adrfam": "IPv4", 00:16:12.693 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:12.693 "trsvcid": "0" 00:16:12.693 } 00:16:12.693 ], 00:16:12.693 "allow_any_host": true, 00:16:12.693 "hosts": [], 00:16:12.693 "serial_number": "SPDK1", 00:16:12.693 "model_number": "SPDK bdev Controller", 00:16:12.693 "max_namespaces": 32, 00:16:12.693 "min_cntlid": 1, 00:16:12.693 "max_cntlid": 65519, 00:16:12.693 "namespaces": [ 00:16:12.693 { 00:16:12.693 "nsid": 1, 00:16:12.693 "bdev_name": "Malloc1", 00:16:12.693 "name": "Malloc1", 00:16:12.693 "nguid": "9C2A0434D10A4AC691578D5D3465286D", 00:16:12.693 "uuid": "9c2a0434-d10a-4ac6-9157-8d5d3465286d" 00:16:12.693 }, 00:16:12.693 { 00:16:12.693 "nsid": 2, 00:16:12.693 "bdev_name": "Malloc3", 00:16:12.693 "name": "Malloc3", 00:16:12.693 "nguid": "C8153DA4C2C945CE9CB5108010EA1F5C", 00:16:12.693 "uuid": "c8153da4-c2c9-45ce-9cb5-108010ea1f5c" 00:16:12.693 } 00:16:12.693 ] 00:16:12.693 }, 00:16:12.693 { 00:16:12.693 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:12.693 "subtype": "NVMe", 00:16:12.693 "listen_addresses": [ 00:16:12.693 { 00:16:12.693 "trtype": "VFIOUSER", 00:16:12.693 "adrfam": "IPv4", 00:16:12.693 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:12.693 "trsvcid": "0" 00:16:12.693 } 00:16:12.693 ], 00:16:12.693 "allow_any_host": true, 00:16:12.693 "hosts": [], 00:16:12.693 "serial_number": "SPDK2", 00:16:12.693 "model_number": "SPDK bdev Controller", 00:16:12.693 "max_namespaces": 32, 00:16:12.693 "min_cntlid": 1, 00:16:12.693 "max_cntlid": 65519, 00:16:12.693 "namespaces": [ 00:16:12.693 { 00:16:12.693 "nsid": 1, 00:16:12.693 "bdev_name": "Malloc2", 00:16:12.693 "name": "Malloc2", 00:16:12.693 "nguid": "B293A31C34404335AE000B8D457C4CAD", 00:16:12.693 "uuid": "b293a31c-3440-4335-ae00-0b8d457c4cad" 00:16:12.693 }, 00:16:12.693 { 00:16:12.693 "nsid": 2, 00:16:12.693 "bdev_name": "Malloc4", 00:16:12.693 "name": "Malloc4", 00:16:12.693 "nguid": "37C2B910FD0D41E2AB4528EEDBC733DB", 00:16:12.693 "uuid": "37c2b910-fd0d-41e2-ab45-28eedbc733db" 00:16:12.693 } 00:16:12.693 ] 00:16:12.693 } 00:16:12.693 ] 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2093833 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2085079 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2085079 ']' 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2085079 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2085079 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2085079' 00:16:12.693 killing process with pid 2085079 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2085079 00:16:12.693 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2085079 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2094051 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2094051' 00:16:12.957 Process pid: 2094051 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2094051 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2094051 ']' 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.957 18:28:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:12.957 [2024-12-06 18:28:07.604371] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:12.957 [2024-12-06 18:28:07.605299] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:16:12.957 [2024-12-06 18:28:07.605343] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.957 [2024-12-06 18:28:07.692331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:12.957 [2024-12-06 18:28:07.727462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:12.957 [2024-12-06 18:28:07.727499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.957 [2024-12-06 18:28:07.727506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.957 [2024-12-06 18:28:07.727510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.957 [2024-12-06 18:28:07.727515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:12.957 [2024-12-06 18:28:07.728757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.957 [2024-12-06 18:28:07.728910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.957 [2024-12-06 18:28:07.729060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.957 [2024-12-06 18:28:07.729062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.218 [2024-12-06 18:28:07.782881] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:13.218 [2024-12-06 18:28:07.783857] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:13.218 [2024-12-06 18:28:07.784560] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:13.218 [2024-12-06 18:28:07.785387] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:13.218 [2024-12-06 18:28:07.785415] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
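With the interrupt-mode target up, the trace below re-creates the transport and both vfio-user controllers over RPC. A condensed sketch of that bring-up, using the same rpc.py calls that appear in the trace (the RPC variable and the loop wrapper are added here for brevity; the -M -I transport flags and all arguments are copied verbatim from the log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Create the VFIOUSER transport with the flags used by this run.
    "$RPC" nvmf_create_transport -t VFIOUSER -M -I

    # One malloc-backed subsystem per vfio-user socket directory.
    for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      "$RPC" bdev_malloc_create 64 512 -b Malloc$i                 # 64 MiB, 512 B blocks
      "$RPC" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      "$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      "$RPC" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done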
00:16:13.789 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.790 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:13.790 18:28:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:14.732 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:14.993 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:14.993 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:14.993 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:14.993 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:14.993 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:15.254 Malloc1 00:16:15.254 18:28:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:15.254 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:15.514 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:15.774 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:15.774 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:15.774 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:16.035 Malloc2 00:16:16.035 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:16.035 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:16.297 18:28:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2094051 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2094051 ']' 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2094051 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2094051 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2094051' 00:16:16.557 killing process with pid 2094051 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2094051 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2094051 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:16.557 00:16:16.557 real 0m50.354s 00:16:16.557 user 3m12.845s 00:16:16.557 sys 0m2.664s 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.557 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:16.557 ************************************ 00:16:16.557 END TEST nvmf_vfio_user 00:16:16.557 ************************************ 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:16.819 ************************************ 00:16:16.819 START TEST nvmf_vfio_user_nvme_compliance 00:16:16.819 ************************************ 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:16.819 * Looking for test storage... 
00:16:16.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.819 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:16.820 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.081 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:17.081 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:17.081 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.081 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:17.081 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.081 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.082 --rc genhtml_branch_coverage=1 00:16:17.082 --rc genhtml_function_coverage=1 00:16:17.082 --rc genhtml_legend=1 00:16:17.082 --rc geninfo_all_blocks=1 00:16:17.082 --rc geninfo_unexecuted_blocks=1 00:16:17.082 00:16:17.082 ' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.082 --rc genhtml_branch_coverage=1 00:16:17.082 --rc genhtml_function_coverage=1 00:16:17.082 --rc genhtml_legend=1 00:16:17.082 --rc geninfo_all_blocks=1 00:16:17.082 --rc geninfo_unexecuted_blocks=1 00:16:17.082 00:16:17.082 ' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.082 --rc genhtml_branch_coverage=1 00:16:17.082 --rc genhtml_function_coverage=1 00:16:17.082 --rc genhtml_legend=1 00:16:17.082 --rc geninfo_all_blocks=1 00:16:17.082 --rc geninfo_unexecuted_blocks=1 00:16:17.082 00:16:17.082 ' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:17.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.082 --rc genhtml_branch_coverage=1 00:16:17.082 --rc genhtml_function_coverage=1 00:16:17.082 --rc genhtml_legend=1 00:16:17.082 --rc geninfo_all_blocks=1 00:16:17.082 --rc 
geninfo_unexecuted_blocks=1 00:16:17.082 00:16:17.082 ' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2094926 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2094926' 00:16:17.082 Process pid: 2094926 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2094926 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2094926 ']' 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.082 18:28:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:17.082 [2024-12-06 18:28:11.707368] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:16:17.082 [2024-12-06 18:28:11.707422] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.082 [2024-12-06 18:28:11.789597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.083 [2024-12-06 18:28:11.819918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.083 [2024-12-06 18:28:11.819959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.083 [2024-12-06 18:28:11.819965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.083 [2024-12-06 18:28:11.819970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.083 [2024-12-06 18:28:11.819974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.083 [2024-12-06 18:28:11.821117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.083 [2024-12-06 18:28:11.821264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.083 [2024-12-06 18:28:11.821267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.025 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.025 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:18.025 18:28:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 malloc0 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:18.967 18:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.967 18:28:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:18.967 00:16:18.967 00:16:18.967 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.967 http://cunit.sourceforge.net/ 00:16:18.967 00:16:18.967 00:16:18.967 Suite: nvme_compliance 00:16:18.967 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 18:28:13.741543] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.967 [2024-12-06 18:28:13.742849] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:18.967 [2024-12-06 18:28:13.742861] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:18.967 [2024-12-06 18:28:13.742866] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:18.967 [2024-12-06 18:28:13.744565] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.229 passed 00:16:19.229 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 18:28:13.816036] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.229 [2024-12-06 18:28:13.819058] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.229 passed 00:16:19.229 Test: admin_identify_ns ...[2024-12-06 18:28:13.896632] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.229 [2024-12-06 18:28:13.958647] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:19.229 [2024-12-06 18:28:13.966652] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:19.229 [2024-12-06 18:28:13.987732] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:19.490 passed 00:16:19.490 Test: admin_get_features_mandatory_features ...[2024-12-06 18:28:14.061975] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.490 [2024-12-06 18:28:14.067001] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.490 passed 00:16:19.490 Test: admin_get_features_optional_features ...[2024-12-06 18:28:14.142456] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.490 [2024-12-06 18:28:14.145483] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.490 passed 00:16:19.490 Test: admin_set_features_number_of_queues ...[2024-12-06 18:28:14.220007] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.752 [2024-12-06 18:28:14.324732] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.752 passed 00:16:19.752 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 18:28:14.402793] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.752 [2024-12-06 18:28:14.405812] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.752 passed 00:16:19.752 Test: admin_get_log_page_with_lpo ...[2024-12-06 18:28:14.478553] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.013 [2024-12-06 18:28:14.547644] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:20.013 [2024-12-06 18:28:14.559752] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.013 passed 00:16:20.013 Test: fabric_property_get ...[2024-12-06 18:28:14.631971] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.013 [2024-12-06 18:28:14.633164] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:20.013 [2024-12-06 18:28:14.635992] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.013 passed 00:16:20.013 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 18:28:14.712440] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.013 [2024-12-06 18:28:14.713645] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:20.013 [2024-12-06 18:28:14.715462] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.013 passed 00:16:20.013 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 18:28:14.789981] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.274 [2024-12-06 18:28:14.873643] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:20.274 [2024-12-06 18:28:14.889643] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:20.274 [2024-12-06 18:28:14.894718] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.274 passed 00:16:20.274 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 18:28:14.970762] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.274 [2024-12-06 18:28:14.971969] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:20.274 [2024-12-06 18:28:14.973778] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.274 passed 00:16:20.274 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 18:28:15.050012] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.561 [2024-12-06 18:28:15.126647] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:20.561 [2024-12-06 18:28:15.150647] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:20.561 [2024-12-06 18:28:15.155712] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.561 passed 00:16:20.561 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 18:28:15.228895] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.561 [2024-12-06 18:28:15.230103] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:20.561 [2024-12-06 18:28:15.230123] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:20.561 [2024-12-06 18:28:15.231921] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.561 passed 00:16:20.561 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 18:28:15.306685] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.854 [2024-12-06 18:28:15.399648] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:20.855 [2024-12-06 18:28:15.407643] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:20.855 [2024-12-06 18:28:15.415643] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:20.855 [2024-12-06 18:28:15.423643] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:20.855 [2024-12-06 18:28:15.452713] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.855 passed 00:16:20.855 Test: admin_create_io_sq_verify_pc ...[2024-12-06 18:28:15.525900] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.855 [2024-12-06 18:28:15.542651] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:20.855 [2024-12-06 18:28:15.560057] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.855 passed 00:16:21.198 Test: admin_create_io_qp_max_qps ...[2024-12-06 18:28:15.638531] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.158 [2024-12-06 18:28:16.748647] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:22.418 [2024-12-06 18:28:17.123321] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.418 passed 00:16:22.418 Test: admin_create_io_sq_shared_cq ...[2024-12-06 18:28:17.199114] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.677 [2024-12-06 18:28:17.331645] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:22.677 [2024-12-06 18:28:17.368686] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.677 passed 00:16:22.677 00:16:22.677 Run Summary: Type Total Ran Passed Failed Inactive 00:16:22.677 suites 1 1 n/a 0 0 00:16:22.677 tests 18 18 18 0 0 00:16:22.677 asserts 
360 360 360 0 n/a 00:16:22.677 00:16:22.677 Elapsed time = 1.493 seconds 00:16:22.677 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2094926 00:16:22.677 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2094926 ']' 00:16:22.677 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2094926 00:16:22.677 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:22.677 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.677 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2094926 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2094926' 00:16:22.938 killing process with pid 2094926 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2094926 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2094926 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:22.938 00:16:22.938 real 0m6.188s 00:16:22.938 user 0m17.533s 00:16:22.938 sys 0m0.533s 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:22.938 ************************************ 00:16:22.938 END TEST nvmf_vfio_user_nvme_compliance 00:16:22.938 ************************************ 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:22.938 ************************************ 00:16:22.938 START TEST nvmf_vfio_user_fuzz 00:16:22.938 ************************************ 00:16:22.938 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:23.200 * Looking for test storage... 
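The compliance-suite teardown traced above follows the autotest killprocess pattern: confirm a pid was passed, probe it with kill -0, check via ps that the command is not the sudo wrapper, then kill and wait. A hedged reconstruction (the real helper lives in autotest_common.sh and is more elaborate; this paraphrases only the calls visible in the trace):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                    # no pid supplied
        kill -0 "$pid" 2> /dev/null || return 0      # process already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # never kill the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }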
00:16:23.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:23.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.200 --rc genhtml_branch_coverage=1 00:16:23.200 --rc genhtml_function_coverage=1 00:16:23.200 --rc genhtml_legend=1 00:16:23.200 --rc geninfo_all_blocks=1 00:16:23.200 --rc geninfo_unexecuted_blocks=1 00:16:23.200 00:16:23.200 ' 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:23.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.200 --rc genhtml_branch_coverage=1 00:16:23.200 --rc genhtml_function_coverage=1 00:16:23.200 --rc genhtml_legend=1 00:16:23.200 --rc geninfo_all_blocks=1 00:16:23.200 --rc geninfo_unexecuted_blocks=1 00:16:23.200 00:16:23.200 ' 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:23.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.200 --rc genhtml_branch_coverage=1 00:16:23.200 --rc genhtml_function_coverage=1 00:16:23.200 --rc genhtml_legend=1 00:16:23.200 --rc geninfo_all_blocks=1 00:16:23.200 --rc geninfo_unexecuted_blocks=1 00:16:23.200 00:16:23.200 ' 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:23.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.200 --rc genhtml_branch_coverage=1 00:16:23.200 --rc genhtml_function_coverage=1 00:16:23.200 --rc genhtml_legend=1 00:16:23.200 --rc geninfo_all_blocks=1 00:16:23.200 --rc geninfo_unexecuted_blocks=1 00:16:23.200 00:16:23.200 ' 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.200 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:23.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2096273 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2096273' 00:16:23.201 Process pid: 2096273 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2096273 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2096273 ']' 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
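Above, the fuzz-target nvmf_tgt is launched on core mask 0x1 and the script blocks in waitforlisten until pid 2096273 answers on /var/tmp/spdk.sock. A simplified sketch of such a wait loop (the probe via rpc.py rpc_get_methods and the helper body are illustrative assumptions; only the pid, socket path, and max_retries=100 come from the trace):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do              # max_retries=100, as in the trace
            kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                             # RPC socket is accepting requests
            fi
            sleep 0.1
        done
        return 1
    }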
00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.201 18:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:24.142 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.143 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:24.143 18:28:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.088 malloc0 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
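Collected from the trace above: the vfio-user fuzz target is assembled with five RPCs before the fuzzer attaches. As standalone commands the sequence would look roughly like this (a sketch; the test issues the same calls through its rpc_cmd wrapper against the running nvmf_tgt):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0     # 64 MB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0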
00:16:25.088 18:28:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:57.190 Fuzzing completed. Shutting down the fuzz application 00:16:57.190 00:16:57.190 Dumping successful admin opcodes: 00:16:57.190 9, 10, 00:16:57.190 Dumping successful io opcodes: 00:16:57.190 0, 00:16:57.190 NS: 0x20000081ef00 I/O qp, Total commands completed: 1248057, total successful commands: 4899, random_seed: 502990464 00:16:57.190 NS: 0x20000081ef00 admin qp, Total commands completed: 280288, total successful commands: 65, random_seed: 1713386048 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2096273 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2096273 ']' 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2096273 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2096273 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2096273' 00:16:57.190 killing process with pid 2096273 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2096273 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2096273 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:57.190 00:16:57.190 real 0m32.803s 00:16:57.190 user 0m35.196s 00:16:57.190 sys 0m26.238s 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:57.190 ************************************ 
00:16:57.190 END TEST nvmf_vfio_user_fuzz 00:16:57.190 ************************************ 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:57.190 ************************************ 00:16:57.190 START TEST nvmf_auth_target 00:16:57.190 ************************************ 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:57.190 * Looking for test storage... 00:16:57.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:57.190 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:57.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.191 --rc genhtml_branch_coverage=1 00:16:57.191 --rc genhtml_function_coverage=1 00:16:57.191 --rc genhtml_legend=1 00:16:57.191 --rc geninfo_all_blocks=1 00:16:57.191 --rc geninfo_unexecuted_blocks=1 00:16:57.191 00:16:57.191 ' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:57.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.191 --rc genhtml_branch_coverage=1 00:16:57.191 --rc genhtml_function_coverage=1 00:16:57.191 --rc genhtml_legend=1 00:16:57.191 --rc geninfo_all_blocks=1 00:16:57.191 --rc geninfo_unexecuted_blocks=1 00:16:57.191 00:16:57.191 ' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:57.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.191 --rc genhtml_branch_coverage=1 00:16:57.191 --rc genhtml_function_coverage=1 00:16:57.191 --rc genhtml_legend=1 00:16:57.191 --rc geninfo_all_blocks=1 00:16:57.191 --rc geninfo_unexecuted_blocks=1 00:16:57.191 00:16:57.191 ' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:57.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.191 --rc genhtml_branch_coverage=1 00:16:57.191 --rc genhtml_function_coverage=1 00:16:57.191 --rc genhtml_legend=1 00:16:57.191 --rc geninfo_all_blocks=1 00:16:57.191 --rc geninfo_unexecuted_blocks=1 00:16:57.191 00:16:57.191 ' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.191 18:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:57.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:57.191 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:57.192 18:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:03.780 
18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:03.780 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:03.780 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.781 18:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:03.781 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:03.781 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:03.781 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
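The device discovery above resolves each matched E810 PCI function to its kernel netdev through sysfs. A standalone sketch of that mapping, using the first bus address reported in this run:

  pci=0000:4b:00.0                                  # first E810 port found above (ice driver)
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs maps the PCI function to its netdev
  pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the path, keeping e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"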
net_devs+=("${pci_net_devs[@]}") 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:03.781 18:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:03.781 18:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:03.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:17:03.781 00:17:03.781 --- 10.0.0.2 ping statistics --- 00:17:03.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.781 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:17:03.781 00:17:03.781 --- 10.0.0.1 ping statistics --- 00:17:03.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.781 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2106313 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2106313 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2106313 ']' 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
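Once both directions answer ping, the harness loads the kernel initiator transport and launches the target inside the namespace, as seen above. A condensed sketch of the same readiness check and launch (paths shortened relative to the workspace):

  ping -c 1 10.0.0.2                                 # target IP, reached from the root ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # initiator IP, reached from inside the ns
  modprobe nvme-tcp                                  # kernel transport for 'nvme connect' below
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &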
00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2106333 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:03.781 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=49f5b94f4b566ac1b11d8f8a6c8b1c303d18df959f6811b7 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Mbb 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 49f5b94f4b566ac1b11d8f8a6c8b1c303d18df959f6811b7 0 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 49f5b94f4b566ac1b11d8f8a6c8b1c303d18df959f6811b7 0 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=49f5b94f4b566ac1b11d8f8a6c8b1c303d18df959f6811b7 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:03.782 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
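Alongside the target, a second SPDK app plays the NVMe host; every hostrpc call in this trace is rpc.py pointed at its socket. A sketch of how the two RPC endpoints pair up, assuming the waitforlisten helper from autotest_common.sh used above:

  ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &   # host-side bdev/nvme stack
  hostpid=$!
  waitforlisten "$hostpid" /var/tmp/host.sock       # autotest_common.sh helper, as above
  hostrpc() { ./scripts/rpc.py -s /var/tmp/host.sock "$@"; }       # target/auth.sh@31 equivalent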
00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Mbb 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Mbb 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Mbb 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5efbff52582d620dd521e5d2c385481a106256e649d6efd4f05ff5c4f7c72b1c 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.PE1 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5efbff52582d620dd521e5d2c385481a106256e649d6efd4f05ff5c4f7c72b1c 3 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5efbff52582d620dd521e5d2c385481a106256e649d6efd4f05ff5c4f7c72b1c 3 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5efbff52582d620dd521e5d2c385481a106256e649d6efd4f05ff5c4f7c72b1c 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.PE1 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.PE1 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.PE1 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
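gen_dhchap_key above reads len/2 random bytes with xxd, hex-encodes them into a len-character ASCII secret, and wraps that in a DHHC-1 envelope via an elided python one-liner. A minimal reconstruction of that formatting step, assuming the CRC-32 little-endian trailer convention of the NVMe transformed-key format (consistent with the DHHC-1:00:/03: strings printed later in this trace):

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars = a 48-character secret string
  digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
  python3 - "$key" "$digest" <<'EOF'
  import base64, sys, zlib
  secret = sys.argv[1].encode()
  crc = zlib.crc32(secret).to_bytes(4, "little")   # 4-byte trailer over the secret
  print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
  EOF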
00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:04.043 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1a87378bf3a8de5756dc569f136ea7f8 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.u4J 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1a87378bf3a8de5756dc569f136ea7f8 1 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1a87378bf3a8de5756dc569f136ea7f8 1 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1a87378bf3a8de5756dc569f136ea7f8 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.u4J 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.u4J 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.u4J 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b38b53b1a52abc1b0ec80dcec1b416704de783348dad80c2 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hFN 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b38b53b1a52abc1b0ec80dcec1b416704de783348dad80c2 2 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b38b53b1a52abc1b0ec80dcec1b416704de783348dad80c2 2 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:04.044 18:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b38b53b1a52abc1b0ec80dcec1b416704de783348dad80c2 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hFN 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hFN 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.hFN 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cde2f157813b141ef592062db6fe4234092572cabfce02d3 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.jS9 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cde2f157813b141ef592062db6fe4234092572cabfce02d3 2 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cde2f157813b141ef592062db6fe4234092572cabfce02d3 2 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cde2f157813b141ef592062db6fe4234092572cabfce02d3 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:04.044 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.jS9 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.jS9 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.jS9 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=22a61abbffc28c4dbfe7453295779cd2 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HWz 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 22a61abbffc28c4dbfe7453295779cd2 1 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 22a61abbffc28c4dbfe7453295779cd2 1 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=22a61abbffc28c4dbfe7453295779cd2 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HWz 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HWz 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.HWz 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f719975042fbdc18d8d0df9a9421db5dda56d9a6655a89139730419e86bb7d91 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.qhY 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key f719975042fbdc18d8d0df9a9421db5dda56d9a6655a89139730419e86bb7d91 3 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f719975042fbdc18d8d0df9a9421db5dda56d9a6655a89139730419e86bb7d91 3 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f719975042fbdc18d8d0df9a9421db5dda56d9a6655a89139730419e86bb7d91 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:04.306 18:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:04.306 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.qhY 00:17:04.306 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.qhY 00:17:04.306 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.qhY 00:17:04.306 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:04.307 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2106313 00:17:04.307 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2106313 ']' 00:17:04.307 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.307 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.307 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.307 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.307 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2106333 /var/tmp/host.sock 00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2106333 ']' 00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:04.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.568 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mbb 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Mbb 00:17:04.829 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Mbb 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.PE1 ]] 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PE1 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PE1 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PE1 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.u4J 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.090 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.090 18:28:59 
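Each generated key file is registered under the same keyring name on both RPC servers, so the target's subsystem config and the host's controller config resolve key0/ckey0 to the same secret. A sketch mirroring the rpc_cmd/hostrpc pairs above (rpc.py without -s defaults to the target's /var/tmp/spdk.sock):

  ./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Mbb      # target side
  hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Mbb               # host side
  ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PE1   # ctrlr (bidirectional) key
  hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PE1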
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.u4J 00:17:05.091 18:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.u4J 00:17:05.351 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.hFN ]] 00:17:05.351 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hFN 00:17:05.351 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.351 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.351 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.351 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hFN 00:17:05.351 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hFN 00:17:05.613 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:05.613 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jS9 00:17:05.613 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.613 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.613 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.613 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.jS9 00:17:05.613 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.jS9 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.HWz ]] 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HWz 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HWz 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HWz 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:05.874 18:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qhY 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qhY 00:17:05.874 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.qhY 00:17:06.136 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:06.136 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:06.136 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.136 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.136 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.136 18:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.397 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.397 
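One authenticated attach, as traced above, is three RPCs: restrict the host stack's DH-HMAC-CHAP parameters, allow the host NQN on the subsystem with its key pair, then attach a controller with the same keys. A condensed sketch with this run's identifiers:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0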
18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.659 00:17:06.659 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.659 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.659 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.921 { 00:17:06.921 "cntlid": 1, 00:17:06.921 "qid": 0, 00:17:06.921 "state": "enabled", 00:17:06.921 "thread": "nvmf_tgt_poll_group_000", 00:17:06.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.921 "listen_address": { 00:17:06.921 "trtype": "TCP", 00:17:06.921 "adrfam": "IPv4", 00:17:06.921 "traddr": "10.0.0.2", 00:17:06.921 "trsvcid": "4420" 00:17:06.921 }, 00:17:06.921 "peer_address": { 00:17:06.921 "trtype": "TCP", 00:17:06.921 "adrfam": "IPv4", 00:17:06.921 "traddr": "10.0.0.1", 00:17:06.921 "trsvcid": "49858" 00:17:06.921 }, 00:17:06.921 "auth": { 00:17:06.921 "state": "completed", 00:17:06.921 "digest": "sha256", 00:17:06.921 "dhgroup": "null" 00:17:06.921 } 00:17:06.921 } 00:17:06.921 ]' 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.921 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.182 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
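The qpair listing printed above is then asserted field by field before tearing the controller down. A sketch of those checks against the same JSON shape:

  qpairs=$(./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]        # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication succeeded
  hostrpc bdev_nvme_detach_controller nvme0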
DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:07.182 18:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:07.755 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.755 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.755 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.755 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.755 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.755 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.755 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.755 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.017 18:29:02 
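The kernel-initiator pass shown above exercises the same key pair through nvme-cli; the transformed secrets on the command line are the DHHC-1 strings derived from keys[0]/ckeys[0]. A sketch of that round trip, with the placeholder secrets standing in for the full strings printed in the trace:

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
      -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret 'DHHC-1:00:<keys[0] string above>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<ckeys[0] string above>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"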
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.017 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.276 00:17:08.276 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.277 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.277 18:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.537 { 00:17:08.537 "cntlid": 3, 00:17:08.537 "qid": 0, 00:17:08.537 "state": "enabled", 00:17:08.537 "thread": "nvmf_tgt_poll_group_000", 00:17:08.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:08.537 "listen_address": { 00:17:08.537 "trtype": "TCP", 00:17:08.537 "adrfam": "IPv4", 00:17:08.537 "traddr": "10.0.0.2", 00:17:08.537 "trsvcid": "4420" 00:17:08.537 }, 00:17:08.537 "peer_address": { 00:17:08.537 "trtype": "TCP", 00:17:08.537 "adrfam": "IPv4", 00:17:08.537 "traddr": "10.0.0.1", 00:17:08.537 "trsvcid": "49886" 00:17:08.537 }, 00:17:08.537 "auth": { 00:17:08.537 "state": "completed", 00:17:08.537 "digest": "sha256", 00:17:08.537 "dhgroup": "null" 00:17:08.537 } 00:17:08.537 } 00:17:08.537 ]' 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.537 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.798 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:08.798 18:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:09.370 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.370 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.370 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.370 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.370 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.370 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.370 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:09.370 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.629 18:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.629 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.889 00:17:09.889 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.889 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.889 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.149 { 00:17:10.149 "cntlid": 5, 00:17:10.149 "qid": 0, 00:17:10.149 "state": "enabled", 00:17:10.149 "thread": "nvmf_tgt_poll_group_000", 00:17:10.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.149 "listen_address": { 00:17:10.149 "trtype": "TCP", 00:17:10.149 "adrfam": "IPv4", 00:17:10.149 "traddr": "10.0.0.2", 00:17:10.149 "trsvcid": "4420" 00:17:10.149 }, 00:17:10.149 "peer_address": { 00:17:10.149 "trtype": "TCP", 00:17:10.149 "adrfam": "IPv4", 00:17:10.149 "traddr": "10.0.0.1", 00:17:10.149 "trsvcid": "58338" 00:17:10.149 }, 00:17:10.149 "auth": { 00:17:10.149 "state": "completed", 00:17:10.149 "digest": "sha256", 00:17:10.149 "dhgroup": "null" 00:17:10.149 } 00:17:10.149 } 00:17:10.149 ]' 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.149 18:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.149 18:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.468 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:10.468 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:11.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:11.043 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.303 18:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:11.303 00:17:11.562 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.563 { 00:17:11.563 "cntlid": 7, 00:17:11.563 "qid": 0, 00:17:11.563 "state": "enabled", 00:17:11.563 "thread": "nvmf_tgt_poll_group_000", 00:17:11.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.563 "listen_address": { 00:17:11.563 "trtype": "TCP", 00:17:11.563 "adrfam": "IPv4", 00:17:11.563 "traddr": "10.0.0.2", 00:17:11.563 "trsvcid": "4420" 00:17:11.563 }, 00:17:11.563 "peer_address": { 00:17:11.563 "trtype": "TCP", 00:17:11.563 "adrfam": "IPv4", 00:17:11.563 "traddr": "10.0.0.1", 00:17:11.563 "trsvcid": "58372" 00:17:11.563 }, 00:17:11.563 "auth": { 00:17:11.563 "state": "completed", 00:17:11.563 "digest": "sha256", 00:17:11.563 "dhgroup": "null" 00:17:11.563 } 00:17:11.563 } 00:17:11.563 ]' 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.563 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.823 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.823 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.823 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.823 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.823 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.084 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:12.084 18:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:12.654 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:12.915 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:12.915 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.915 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.915 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.915 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.915 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.916 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.916 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.916 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.916 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.916 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.916 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.916 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.177 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.177 { 00:17:13.177 "cntlid": 9, 00:17:13.177 "qid": 0, 00:17:13.177 "state": "enabled", 00:17:13.177 "thread": "nvmf_tgt_poll_group_000", 00:17:13.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.177 "listen_address": { 00:17:13.177 "trtype": "TCP", 00:17:13.177 "adrfam": "IPv4", 00:17:13.177 "traddr": "10.0.0.2", 00:17:13.177 "trsvcid": "4420" 00:17:13.177 }, 00:17:13.177 "peer_address": { 00:17:13.177 "trtype": "TCP", 00:17:13.177 "adrfam": "IPv4", 00:17:13.177 "traddr": "10.0.0.1", 00:17:13.177 "trsvcid": "58390" 00:17:13.177 }, 00:17:13.177 "auth": { 00:17:13.177 "state": "completed", 00:17:13.177 "digest": "sha256", 00:17:13.177 "dhgroup": "ffdhe2048" 00:17:13.177 } 00:17:13.177 } 00:17:13.177 ]' 00:17:13.177 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.438 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.438 18:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.438 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:13.438 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.438 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.438 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.438 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.699 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:13.699 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:14.272 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.272 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.272 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.272 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.272 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.272 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.272 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:14.272 18:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.533 18:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.533 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.793 00:17:14.793 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.793 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.793 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.793 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.794 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.794 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.794 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.794 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.794 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.794 { 00:17:14.794 "cntlid": 11, 00:17:14.794 "qid": 0, 00:17:14.794 "state": "enabled", 00:17:14.794 "thread": "nvmf_tgt_poll_group_000", 00:17:14.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:14.794 "listen_address": { 00:17:14.794 "trtype": "TCP", 00:17:14.794 "adrfam": "IPv4", 00:17:14.794 "traddr": "10.0.0.2", 00:17:14.794 "trsvcid": "4420" 00:17:14.794 }, 00:17:14.794 "peer_address": { 00:17:14.794 "trtype": "TCP", 00:17:14.794 "adrfam": "IPv4", 00:17:14.794 "traddr": "10.0.0.1", 00:17:14.794 "trsvcid": "58436" 00:17:14.794 }, 00:17:14.794 "auth": { 00:17:14.794 "state": "completed", 00:17:14.794 "digest": "sha256", 00:17:14.794 "dhgroup": "ffdhe2048" 00:17:14.794 } 00:17:14.794 } 00:17:14.794 ]' 00:17:14.794 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.054 18:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.054 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.054 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.054 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.054 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.054 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.054 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.315 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:15.315 18:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:15.886 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.886 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.886 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.886 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.886 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.886 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.886 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.887 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:16.147 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:16.147 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:16.148 18:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.148 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.408 00:17:16.408 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.408 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.408 18:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.408 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.408 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.408 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.408 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.408 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.408 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.408 { 00:17:16.408 "cntlid": 13, 00:17:16.408 "qid": 0, 00:17:16.408 "state": "enabled", 00:17:16.408 "thread": "nvmf_tgt_poll_group_000", 00:17:16.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.408 "listen_address": { 00:17:16.408 "trtype": "TCP", 00:17:16.408 "adrfam": "IPv4", 00:17:16.408 "traddr": "10.0.0.2", 00:17:16.408 "trsvcid": "4420" 00:17:16.408 }, 00:17:16.408 "peer_address": { 00:17:16.408 "trtype": "TCP", 00:17:16.408 "adrfam": "IPv4", 00:17:16.408 "traddr": "10.0.0.1", 00:17:16.408 "trsvcid": "58466" 00:17:16.408 }, 00:17:16.408 "auth": { 00:17:16.408 "state": "completed", 00:17:16.408 "digest": 
"sha256", 00:17:16.408 "dhgroup": "ffdhe2048" 00:17:16.408 } 00:17:16.408 } 00:17:16.408 ]' 00:17:16.408 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.668 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.668 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.668 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.668 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.668 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.668 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.668 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.929 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:16.929 18:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:17.502 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.502 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:17.502 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.502 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.502 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.502 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.502 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.502 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.763 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:17.763 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.763 18:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.764 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.025 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.025 { 00:17:18.025 "cntlid": 15, 00:17:18.025 "qid": 0, 00:17:18.025 "state": "enabled", 00:17:18.025 "thread": "nvmf_tgt_poll_group_000", 00:17:18.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.025 "listen_address": { 00:17:18.025 "trtype": "TCP", 00:17:18.025 "adrfam": "IPv4", 00:17:18.025 "traddr": "10.0.0.2", 00:17:18.025 "trsvcid": "4420" 00:17:18.025 }, 00:17:18.025 "peer_address": { 00:17:18.025 "trtype": "TCP", 00:17:18.025 "adrfam": "IPv4", 00:17:18.025 "traddr": "10.0.0.1", 00:17:18.025 
"trsvcid": "58490" 00:17:18.025 }, 00:17:18.025 "auth": { 00:17:18.025 "state": "completed", 00:17:18.025 "digest": "sha256", 00:17:18.025 "dhgroup": "ffdhe2048" 00:17:18.025 } 00:17:18.025 } 00:17:18.025 ]' 00:17:18.025 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.287 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.287 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.287 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.287 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.287 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.287 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.287 18:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.549 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:18.549 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.120 18:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:19.380 18:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.380 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.640 00:17:19.640 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.640 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.640 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.900 { 00:17:19.900 "cntlid": 17, 00:17:19.900 "qid": 0, 00:17:19.900 "state": "enabled", 00:17:19.900 "thread": "nvmf_tgt_poll_group_000", 00:17:19.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.900 "listen_address": { 00:17:19.900 "trtype": "TCP", 00:17:19.900 "adrfam": "IPv4", 
00:17:19.900 "traddr": "10.0.0.2", 00:17:19.900 "trsvcid": "4420" 00:17:19.900 }, 00:17:19.900 "peer_address": { 00:17:19.900 "trtype": "TCP", 00:17:19.900 "adrfam": "IPv4", 00:17:19.900 "traddr": "10.0.0.1", 00:17:19.900 "trsvcid": "37158" 00:17:19.900 }, 00:17:19.900 "auth": { 00:17:19.900 "state": "completed", 00:17:19.900 "digest": "sha256", 00:17:19.900 "dhgroup": "ffdhe3072" 00:17:19.900 } 00:17:19.900 } 00:17:19.900 ]' 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.900 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.160 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:20.160 18:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:20.732 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.733 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.733 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.733 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.733 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.733 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.733 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:20.733 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.993 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.253 00:17:21.253 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.253 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.253 18:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.514 { 
00:17:21.514 "cntlid": 19, 00:17:21.514 "qid": 0, 00:17:21.514 "state": "enabled", 00:17:21.514 "thread": "nvmf_tgt_poll_group_000", 00:17:21.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:21.514 "listen_address": { 00:17:21.514 "trtype": "TCP", 00:17:21.514 "adrfam": "IPv4", 00:17:21.514 "traddr": "10.0.0.2", 00:17:21.514 "trsvcid": "4420" 00:17:21.514 }, 00:17:21.514 "peer_address": { 00:17:21.514 "trtype": "TCP", 00:17:21.514 "adrfam": "IPv4", 00:17:21.514 "traddr": "10.0.0.1", 00:17:21.514 "trsvcid": "37182" 00:17:21.514 }, 00:17:21.514 "auth": { 00:17:21.514 "state": "completed", 00:17:21.514 "digest": "sha256", 00:17:21.514 "dhgroup": "ffdhe3072" 00:17:21.514 } 00:17:21.514 } 00:17:21.514 ]' 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.514 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.775 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:21.775 18:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:22.345 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.345 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:22.345 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.345 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.605 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.606 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.866 00:17:22.866 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.866 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.866 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.126 18:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.126 { 00:17:23.126 "cntlid": 21, 00:17:23.126 "qid": 0, 00:17:23.126 "state": "enabled", 00:17:23.126 "thread": "nvmf_tgt_poll_group_000", 00:17:23.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.126 "listen_address": { 00:17:23.126 "trtype": "TCP", 00:17:23.126 "adrfam": "IPv4", 00:17:23.126 "traddr": "10.0.0.2", 00:17:23.126 "trsvcid": "4420" 00:17:23.126 }, 00:17:23.126 "peer_address": { 00:17:23.126 "trtype": "TCP", 00:17:23.126 "adrfam": "IPv4", 00:17:23.126 "traddr": "10.0.0.1", 00:17:23.126 "trsvcid": "37214" 00:17:23.126 }, 00:17:23.126 "auth": { 00:17:23.126 "state": "completed", 00:17:23.126 "digest": "sha256", 00:17:23.126 "dhgroup": "ffdhe3072" 00:17:23.126 } 00:17:23.126 } 00:17:23.126 ]' 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.126 18:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.386 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:23.386 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:23.957 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.957 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.957 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.957 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.957 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:23.957 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.958 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:23.958 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.218 18:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:24.480 00:17:24.480 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.480 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.480 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.742 18:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.742 { 00:17:24.742 "cntlid": 23, 00:17:24.742 "qid": 0, 00:17:24.742 "state": "enabled", 00:17:24.742 "thread": "nvmf_tgt_poll_group_000", 00:17:24.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.742 "listen_address": { 00:17:24.742 "trtype": "TCP", 00:17:24.742 "adrfam": "IPv4", 00:17:24.742 "traddr": "10.0.0.2", 00:17:24.742 "trsvcid": "4420" 00:17:24.742 }, 00:17:24.742 "peer_address": { 00:17:24.742 "trtype": "TCP", 00:17:24.742 "adrfam": "IPv4", 00:17:24.742 "traddr": "10.0.0.1", 00:17:24.742 "trsvcid": "37254" 00:17:24.742 }, 00:17:24.742 "auth": { 00:17:24.742 "state": "completed", 00:17:24.742 "digest": "sha256", 00:17:24.742 "dhgroup": "ffdhe3072" 00:17:24.742 } 00:17:24.742 } 00:17:24.742 ]' 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.742 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.002 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:25.002 18:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:25.573 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.832 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.091 00:17:26.091 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.091 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.091 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.351 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.351 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.351 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.351 18:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.351 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.351 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.351 { 00:17:26.351 "cntlid": 25, 00:17:26.351 "qid": 0, 00:17:26.351 "state": "enabled", 00:17:26.351 "thread": "nvmf_tgt_poll_group_000", 00:17:26.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:26.351 "listen_address": { 00:17:26.351 "trtype": "TCP", 00:17:26.351 "adrfam": "IPv4", 00:17:26.351 "traddr": "10.0.0.2", 00:17:26.351 "trsvcid": "4420" 00:17:26.351 }, 00:17:26.351 "peer_address": { 00:17:26.351 "trtype": "TCP", 00:17:26.351 "adrfam": "IPv4", 00:17:26.351 "traddr": "10.0.0.1", 00:17:26.351 "trsvcid": "37276" 00:17:26.351 }, 00:17:26.351 "auth": { 00:17:26.351 "state": "completed", 00:17:26.351 "digest": "sha256", 00:17:26.351 "dhgroup": "ffdhe4096" 00:17:26.351 } 00:17:26.351 } 00:17:26.351 ]' 00:17:26.351 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.352 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.352 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.352 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.352 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.613 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.613 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.613 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.613 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:26.613 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:27.555 18:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.555 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.816 00:17:27.816 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.816 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.816 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.076 { 00:17:28.076 "cntlid": 27, 00:17:28.076 "qid": 0, 00:17:28.076 "state": "enabled", 00:17:28.076 "thread": "nvmf_tgt_poll_group_000", 00:17:28.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.076 "listen_address": { 00:17:28.076 "trtype": "TCP", 00:17:28.076 "adrfam": "IPv4", 00:17:28.076 "traddr": "10.0.0.2", 00:17:28.076 "trsvcid": "4420" 00:17:28.076 }, 00:17:28.076 "peer_address": { 00:17:28.076 "trtype": "TCP", 00:17:28.076 "adrfam": "IPv4", 00:17:28.076 "traddr": "10.0.0.1", 00:17:28.076 "trsvcid": "37302" 00:17:28.076 }, 00:17:28.076 "auth": { 00:17:28.076 "state": "completed", 00:17:28.076 "digest": "sha256", 00:17:28.076 "dhgroup": "ffdhe4096" 00:17:28.076 } 00:17:28.076 } 00:17:28.076 ]' 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.076 18:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.336 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:28.336 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:28.908 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:28.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.908 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.908 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.908 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.908 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.908 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.908 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.908 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.169 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.170 18:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.431 00:17:29.431 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
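The verification step the trace performs next is easier to read outside the xtrace noise. A minimal sketch, assuming the same names as this run (host-side RPC socket /var/tmp/host.sock, controller nvme0, subsystem nqn.2024-03.io.spdk:cnode0; rpc_py is shorthand introduced here, and the target-side call is assumed to use rpc.py's default socket, as rpc_cmd does in the trace):

    #!/usr/bin/env bash
    set -e
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host side: confirm the bdev controller actually attached.
    name=$("$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # Target side: confirm the qpair negotiated the expected auth parameters.
    qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach so the next key/dhgroup combination starts from a clean host.
    "$rpc_py" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each [[ ... ]] mirrors one of the jq checks echoed below (target/auth.sh@73-77); under set -e a mismatch aborts the script, which is how the test fails fast.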
00:17:29.431 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.431 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.710 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.710 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.710 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.710 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.710 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.710 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.710 { 00:17:29.710 "cntlid": 29, 00:17:29.710 "qid": 0, 00:17:29.710 "state": "enabled", 00:17:29.710 "thread": "nvmf_tgt_poll_group_000", 00:17:29.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:29.710 "listen_address": { 00:17:29.710 "trtype": "TCP", 00:17:29.710 "adrfam": "IPv4", 00:17:29.710 "traddr": "10.0.0.2", 00:17:29.710 "trsvcid": "4420" 00:17:29.710 }, 00:17:29.710 "peer_address": { 00:17:29.710 "trtype": "TCP", 00:17:29.710 "adrfam": "IPv4", 00:17:29.710 "traddr": "10.0.0.1", 00:17:29.710 "trsvcid": "49978" 00:17:29.710 }, 00:17:29.711 "auth": { 00:17:29.711 "state": "completed", 00:17:29.711 "digest": "sha256", 00:17:29.711 "dhgroup": "ffdhe4096" 00:17:29.711 } 00:17:29.711 } 00:17:29.711 ]' 00:17:29.711 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.711 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.711 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.711 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:29.711 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.711 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.711 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.711 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.987 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:29.987 18:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: 
--dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:30.601 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.601 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.601 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.601 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.601 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.601 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.601 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.601 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.861 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.121 00:17:31.121 18:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.121 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.121 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.382 { 00:17:31.382 "cntlid": 31, 00:17:31.382 "qid": 0, 00:17:31.382 "state": "enabled", 00:17:31.382 "thread": "nvmf_tgt_poll_group_000", 00:17:31.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:31.382 "listen_address": { 00:17:31.382 "trtype": "TCP", 00:17:31.382 "adrfam": "IPv4", 00:17:31.382 "traddr": "10.0.0.2", 00:17:31.382 "trsvcid": "4420" 00:17:31.382 }, 00:17:31.382 "peer_address": { 00:17:31.382 "trtype": "TCP", 00:17:31.382 "adrfam": "IPv4", 00:17:31.382 "traddr": "10.0.0.1", 00:17:31.382 "trsvcid": "50006" 00:17:31.382 }, 00:17:31.382 "auth": { 00:17:31.382 "state": "completed", 00:17:31.382 "digest": "sha256", 00:17:31.382 "dhgroup": "ffdhe4096" 00:17:31.382 } 00:17:31.382 } 00:17:31.382 ]' 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.382 18:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.382 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.382 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.382 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.382 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.382 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.643 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:31.643 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:32.214 18:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.475 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.735 00:17:32.735 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.735 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.735 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.018 { 00:17:33.018 "cntlid": 33, 00:17:33.018 "qid": 0, 00:17:33.018 "state": "enabled", 00:17:33.018 "thread": "nvmf_tgt_poll_group_000", 00:17:33.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.018 "listen_address": { 00:17:33.018 "trtype": "TCP", 00:17:33.018 "adrfam": "IPv4", 00:17:33.018 "traddr": "10.0.0.2", 00:17:33.018 "trsvcid": "4420" 00:17:33.018 }, 00:17:33.018 "peer_address": { 00:17:33.018 "trtype": "TCP", 00:17:33.018 "adrfam": "IPv4", 00:17:33.018 "traddr": "10.0.0.1", 00:17:33.018 "trsvcid": "50030" 00:17:33.018 }, 00:17:33.018 "auth": { 00:17:33.018 "state": "completed", 00:17:33.018 "digest": "sha256", 00:17:33.018 "dhgroup": "ffdhe6144" 00:17:33.018 } 00:17:33.018 } 00:17:33.018 ]' 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.018 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.278 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret 
DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:33.278 18:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:33.848 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.848 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.848 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.848 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.848 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.848 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.848 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.848 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.109 18:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.369 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.630 { 00:17:34.630 "cntlid": 35, 00:17:34.630 "qid": 0, 00:17:34.630 "state": "enabled", 00:17:34.630 "thread": "nvmf_tgt_poll_group_000", 00:17:34.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.630 "listen_address": { 00:17:34.630 "trtype": "TCP", 00:17:34.630 "adrfam": "IPv4", 00:17:34.630 "traddr": "10.0.0.2", 00:17:34.630 "trsvcid": "4420" 00:17:34.630 }, 00:17:34.630 "peer_address": { 00:17:34.630 "trtype": "TCP", 00:17:34.630 "adrfam": "IPv4", 00:17:34.630 "traddr": "10.0.0.1", 00:17:34.630 "trsvcid": "50048" 00:17:34.630 }, 00:17:34.630 "auth": { 00:17:34.630 "state": "completed", 00:17:34.630 "digest": "sha256", 00:17:34.630 "dhgroup": "ffdhe6144" 00:17:34.630 } 00:17:34.630 } 00:17:34.630 ]' 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.630 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.889 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.889 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.889 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.889 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.889 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.889 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:34.889 18:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.830 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:36.091 00:17:36.091 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.091 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.091 18:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.353 { 00:17:36.353 "cntlid": 37, 00:17:36.353 "qid": 0, 00:17:36.353 "state": "enabled", 00:17:36.353 "thread": "nvmf_tgt_poll_group_000", 00:17:36.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:36.353 "listen_address": { 00:17:36.353 "trtype": "TCP", 00:17:36.353 "adrfam": "IPv4", 00:17:36.353 "traddr": "10.0.0.2", 00:17:36.353 "trsvcid": "4420" 00:17:36.353 }, 00:17:36.353 "peer_address": { 00:17:36.353 "trtype": "TCP", 00:17:36.353 "adrfam": "IPv4", 00:17:36.353 "traddr": "10.0.0.1", 00:17:36.353 "trsvcid": "50080" 00:17:36.353 }, 00:17:36.353 "auth": { 00:17:36.353 "state": "completed", 00:17:36.353 "digest": "sha256", 00:17:36.353 "dhgroup": "ffdhe6144" 00:17:36.353 } 00:17:36.353 } 00:17:36.353 ]' 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.353 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.614 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.614 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.614 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.614 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:36.614 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.614 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:36.614 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:37.185 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.447 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.447 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.447 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.447 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.447 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.447 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:37.447 18:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.447 18:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.447 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.708 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.970 { 00:17:37.970 "cntlid": 39, 00:17:37.970 "qid": 0, 00:17:37.970 "state": "enabled", 00:17:37.970 "thread": "nvmf_tgt_poll_group_000", 00:17:37.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.970 "listen_address": { 00:17:37.970 "trtype": "TCP", 00:17:37.970 "adrfam": "IPv4", 00:17:37.970 "traddr": "10.0.0.2", 00:17:37.970 "trsvcid": "4420" 00:17:37.970 }, 00:17:37.970 "peer_address": { 00:17:37.970 "trtype": "TCP", 00:17:37.970 "adrfam": "IPv4", 00:17:37.970 "traddr": "10.0.0.1", 00:17:37.970 "trsvcid": "50106" 00:17:37.970 }, 00:17:37.970 "auth": { 00:17:37.970 "state": "completed", 00:17:37.970 "digest": "sha256", 00:17:37.970 "dhgroup": "ffdhe6144" 00:17:37.970 } 00:17:37.970 } 00:17:37.970 ]' 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.970 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.232 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:38.232 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.232 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:38.232 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.232 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.232 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:38.232 18:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:39.176 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.176 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.176 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.176 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
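The records above close one full pass of the harness's connect_authenticate helper and open the next: pin the initiator to a single digest/dhgroup pair, register the key for this host on the subsystem, attach a controller with the matching key pair, assert the negotiated auth fields on the resulting qpair, then tear everything down. A condensed sketch of that pass, built only from commands visible in the trace (hostrpc expands exactly as shown there; rpc_cmd is the target-side rpc.py wrapper whose socket path the xtrace never expands, so the default socket is assumed here; key0/ckey0 name keyring entries loaded earlier in the run):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
    subnqn="nqn.2024-03.io.spdk:cnode0"
    hostrpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side app, per the trace
    rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }                        # target side; default socket assumed

    # pin the initiator to one digest/dhgroup so the negotiated values are deterministic
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # allow this host on the subsystem, keyed with key0/ckey0 (target side)
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach a controller, authenticating with the matching key pair (host side)
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # the qpair's auth block must report the pinned parameters and state=completed
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0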
00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.177 18:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.748 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.748 { 00:17:39.748 "cntlid": 41, 00:17:39.748 "qid": 0, 00:17:39.748 "state": "enabled", 00:17:39.748 "thread": "nvmf_tgt_poll_group_000", 00:17:39.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.748 "listen_address": { 00:17:39.748 "trtype": "TCP", 00:17:39.748 "adrfam": "IPv4", 00:17:39.748 "traddr": "10.0.0.2", 00:17:39.748 "trsvcid": "4420" 00:17:39.748 }, 00:17:39.748 "peer_address": { 00:17:39.748 "trtype": "TCP", 00:17:39.748 "adrfam": "IPv4", 00:17:39.748 "traddr": "10.0.0.1", 00:17:39.748 "trsvcid": "41822" 00:17:39.748 }, 00:17:39.748 "auth": { 00:17:39.748 "state": "completed", 00:17:39.748 "digest": "sha256", 00:17:39.748 "dhgroup": "ffdhe8192" 00:17:39.748 } 00:17:39.748 } 00:17:39.748 ]' 00:17:39.748 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.008 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.008 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.008 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.008 18:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.008 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.008 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.008 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.269 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:40.269 18:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:40.839 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.839 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.839 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.839 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.839 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.839 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.839 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:40.839 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:41.099 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.100 18:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.360 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.621 { 00:17:41.621 "cntlid": 43, 00:17:41.621 "qid": 0, 00:17:41.621 "state": "enabled", 00:17:41.621 "thread": "nvmf_tgt_poll_group_000", 00:17:41.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.621 "listen_address": { 00:17:41.621 "trtype": "TCP", 00:17:41.621 "adrfam": "IPv4", 00:17:41.621 "traddr": "10.0.0.2", 00:17:41.621 "trsvcid": "4420" 00:17:41.621 }, 00:17:41.621 "peer_address": { 00:17:41.621 "trtype": "TCP", 00:17:41.621 "adrfam": "IPv4", 00:17:41.621 "traddr": "10.0.0.1", 00:17:41.621 "trsvcid": "41852" 00:17:41.621 }, 00:17:41.621 "auth": { 00:17:41.621 "state": "completed", 00:17:41.621 "digest": "sha256", 00:17:41.621 "dhgroup": "ffdhe8192" 00:17:41.621 } 00:17:41.621 } 00:17:41.621 ]' 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:41.621 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.882 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.882 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.882 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.882 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.882 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.882 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:41.882 18:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.823 18:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.823 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.396 00:17:43.396 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.396 18:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.396 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.396 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.396 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.396 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.396 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.657 { 00:17:43.657 "cntlid": 45, 00:17:43.657 "qid": 0, 00:17:43.657 "state": "enabled", 00:17:43.657 "thread": "nvmf_tgt_poll_group_000", 00:17:43.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.657 "listen_address": { 00:17:43.657 "trtype": "TCP", 00:17:43.657 "adrfam": "IPv4", 00:17:43.657 "traddr": "10.0.0.2", 00:17:43.657 "trsvcid": "4420" 00:17:43.657 }, 00:17:43.657 "peer_address": { 00:17:43.657 "trtype": "TCP", 00:17:43.657 "adrfam": "IPv4", 00:17:43.657 "traddr": "10.0.0.1", 00:17:43.657 "trsvcid": "41872" 00:17:43.657 }, 00:17:43.657 "auth": { 00:17:43.657 "state": "completed", 00:17:43.657 "digest": "sha256", 00:17:43.657 "dhgroup": "ffdhe8192" 00:17:43.657 } 00:17:43.657 } 00:17:43.657 ]' 00:17:43.657 
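After the jq assertions that follow, the harness re-runs the same authentication through the kernel initiator before tearing the host entry down. The secrets are handed to nvme-cli in their DHHC-1 wire representation, where the two-digit field after "DHHC-1" identifies the hash used to transform the stored secret (00 = no transform, 01/02/03 = SHA-256/-384/-512); that is why the four keys in this run carry four different prefixes. A minimal sketch of that leg for keyid 2, with the secrets exactly as they appear in the trace and hostnqn/subnqn as in the earlier sketch:

    key='DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==:'
    ckey='DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8:'
    # bidirectional DH-HMAC-CHAP through the kernel initiator
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n "$subnqn"    # expect: disconnected 1 controller(s)
    # drop the host entry so the next iteration can re-add it with the next key
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"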
18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.657 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.917 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:43.917 18:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:44.487 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.487 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.487 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.487 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.487 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.487 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.487 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.487 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.746 18:29:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.746 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.315 00:17:45.315 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.315 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.315 18:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.315 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.315 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.315 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.315 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.315 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.315 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.315 { 00:17:45.315 "cntlid": 47, 00:17:45.315 "qid": 0, 00:17:45.315 "state": "enabled", 00:17:45.315 "thread": "nvmf_tgt_poll_group_000", 00:17:45.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:45.315 "listen_address": { 00:17:45.315 "trtype": "TCP", 00:17:45.315 "adrfam": "IPv4", 00:17:45.316 "traddr": "10.0.0.2", 00:17:45.316 "trsvcid": "4420" 00:17:45.316 }, 00:17:45.316 "peer_address": { 00:17:45.316 "trtype": "TCP", 00:17:45.316 "adrfam": "IPv4", 00:17:45.316 "traddr": "10.0.0.1", 00:17:45.316 "trsvcid": "41916" 00:17:45.316 }, 00:17:45.316 "auth": { 00:17:45.316 "state": "completed", 00:17:45.316 
"digest": "sha256", 00:17:45.316 "dhgroup": "ffdhe8192" 00:17:45.316 } 00:17:45.316 } 00:17:45.316 ]' 00:17:45.316 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.316 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.316 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.576 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.576 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.576 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.576 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.576 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.576 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:45.576 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:46.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.517 18:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:46.517 18:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.517 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.518 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.518 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.779 00:17:46.779 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.779 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.779 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.040 { 00:17:47.040 "cntlid": 49, 00:17:47.040 "qid": 0, 00:17:47.040 "state": "enabled", 00:17:47.040 "thread": "nvmf_tgt_poll_group_000", 00:17:47.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.040 "listen_address": { 00:17:47.040 "trtype": "TCP", 00:17:47.040 "adrfam": "IPv4", 
00:17:47.040 "traddr": "10.0.0.2", 00:17:47.040 "trsvcid": "4420" 00:17:47.040 }, 00:17:47.040 "peer_address": { 00:17:47.040 "trtype": "TCP", 00:17:47.040 "adrfam": "IPv4", 00:17:47.040 "traddr": "10.0.0.1", 00:17:47.040 "trsvcid": "41948" 00:17:47.040 }, 00:17:47.040 "auth": { 00:17:47.040 "state": "completed", 00:17:47.040 "digest": "sha384", 00:17:47.040 "dhgroup": "null" 00:17:47.040 } 00:17:47.040 } 00:17:47.040 ]' 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.040 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.301 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:47.301 18:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:47.871 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.871 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.871 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.871 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.871 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.871 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.871 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:47.871 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.132 18:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.392 00:17:48.393 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.393 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.393 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.653 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.653 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.653 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.653 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.653 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.653 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.653 { 00:17:48.653 "cntlid": 51, 00:17:48.653 "qid": 0, 00:17:48.654 "state": "enabled", 
00:17:48.654 "thread": "nvmf_tgt_poll_group_000", 00:17:48.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.654 "listen_address": { 00:17:48.654 "trtype": "TCP", 00:17:48.654 "adrfam": "IPv4", 00:17:48.654 "traddr": "10.0.0.2", 00:17:48.654 "trsvcid": "4420" 00:17:48.654 }, 00:17:48.654 "peer_address": { 00:17:48.654 "trtype": "TCP", 00:17:48.654 "adrfam": "IPv4", 00:17:48.654 "traddr": "10.0.0.1", 00:17:48.654 "trsvcid": "41970" 00:17:48.654 }, 00:17:48.654 "auth": { 00:17:48.654 "state": "completed", 00:17:48.654 "digest": "sha384", 00:17:48.654 "dhgroup": "null" 00:17:48.654 } 00:17:48.654 } 00:17:48.654 ]' 00:17:48.654 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.654 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.654 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.654 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:48.654 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.654 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.654 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.654 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.914 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:48.914 18:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:49.483 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.484 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.484 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.484 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.744 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.026 00:17:50.026 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.026 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.026 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.286 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.286 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.286 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.286 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.286 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.286 18:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.286 { 00:17:50.286 "cntlid": 53, 00:17:50.287 "qid": 0, 00:17:50.287 "state": "enabled", 00:17:50.287 "thread": "nvmf_tgt_poll_group_000", 00:17:50.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:50.287 "listen_address": { 00:17:50.287 "trtype": "TCP", 00:17:50.287 "adrfam": "IPv4", 00:17:50.287 "traddr": "10.0.0.2", 00:17:50.287 "trsvcid": "4420" 00:17:50.287 }, 00:17:50.287 "peer_address": { 00:17:50.287 "trtype": "TCP", 00:17:50.287 "adrfam": "IPv4", 00:17:50.287 "traddr": "10.0.0.1", 00:17:50.287 "trsvcid": "34912" 00:17:50.287 }, 00:17:50.287 "auth": { 00:17:50.287 "state": "completed", 00:17:50.287 "digest": "sha384", 00:17:50.287 "dhgroup": "null" 00:17:50.287 } 00:17:50.287 } 00:17:50.287 ]' 00:17:50.287 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.287 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:50.287 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.287 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:50.287 18:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.287 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.287 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.287 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.546 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:50.546 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:51.115 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.115 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.115 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.115 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.115 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.115 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:51.115 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:51.115 18:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.375 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.634 00:17:51.634 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.634 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.634 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.894 { 00:17:51.894 "cntlid": 55, 00:17:51.894 "qid": 0, 00:17:51.894 "state": "enabled", 00:17:51.894 "thread": "nvmf_tgt_poll_group_000", 00:17:51.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.894 "listen_address": { 00:17:51.894 "trtype": "TCP", 00:17:51.894 "adrfam": "IPv4", 00:17:51.894 "traddr": "10.0.0.2", 00:17:51.894 "trsvcid": "4420" 00:17:51.894 }, 00:17:51.894 "peer_address": { 00:17:51.894 "trtype": "TCP", 00:17:51.894 "adrfam": "IPv4", 00:17:51.894 "traddr": "10.0.0.1", 00:17:51.894 "trsvcid": "34932" 00:17:51.894 }, 00:17:51.894 "auth": { 00:17:51.894 "state": "completed", 00:17:51.894 "digest": "sha384", 00:17:51.894 "dhgroup": "null" 00:17:51.894 } 00:17:51.894 } 00:17:51.894 ]' 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.894 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.153 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:52.153 18:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:52.722 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.722 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.722 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.722 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.722 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.722 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.722 18:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.722 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.722 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.982 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.242 00:17:53.242 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.242 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.242 18:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.503 { 00:17:53.503 "cntlid": 57, 00:17:53.503 "qid": 0, 00:17:53.503 "state": "enabled", 00:17:53.503 "thread": "nvmf_tgt_poll_group_000", 00:17:53.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.503 "listen_address": { 00:17:53.503 "trtype": "TCP", 00:17:53.503 "adrfam": "IPv4", 00:17:53.503 "traddr": "10.0.0.2", 00:17:53.503 "trsvcid": "4420" 00:17:53.503 }, 00:17:53.503 "peer_address": { 00:17:53.503 "trtype": "TCP", 00:17:53.503 "adrfam": "IPv4", 00:17:53.503 "traddr": "10.0.0.1", 00:17:53.503 "trsvcid": "34966" 00:17:53.503 }, 00:17:53.503 "auth": { 00:17:53.503 "state": "completed", 00:17:53.503 "digest": "sha384", 00:17:53.503 "dhgroup": "ffdhe2048" 00:17:53.503 } 00:17:53.503 } 00:17:53.503 ]' 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.503 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.764 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:53.764 18:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:17:54.335 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.335 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.335 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.335 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.336 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.336 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.336 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:54.336 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.596 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.856 00:17:54.856 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.856 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.856 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.117 { 00:17:55.117 "cntlid": 59, 00:17:55.117 "qid": 0, 00:17:55.117 "state": "enabled", 00:17:55.117 "thread": "nvmf_tgt_poll_group_000", 00:17:55.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:55.117 "listen_address": { 00:17:55.117 "trtype": "TCP", 00:17:55.117 "adrfam": "IPv4", 00:17:55.117 "traddr": "10.0.0.2", 00:17:55.117 "trsvcid": "4420" 00:17:55.117 }, 00:17:55.117 "peer_address": { 00:17:55.117 "trtype": "TCP", 00:17:55.117 "adrfam": "IPv4", 00:17:55.117 "traddr": "10.0.0.1", 00:17:55.117 "trsvcid": "34998" 00:17:55.117 }, 00:17:55.117 "auth": { 00:17:55.117 "state": "completed", 00:17:55.117 "digest": "sha384", 00:17:55.117 "dhgroup": "ffdhe2048" 00:17:55.117 } 00:17:55.117 } 00:17:55.117 ]' 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.117 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.378 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:55.378 18:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:17:55.950 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.950 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.950 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.950 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.950 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.950 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.950 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.950 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:56.210 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:56.210 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.211 18:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.471 00:17:56.471 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.471 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:56.472 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.733 { 00:17:56.733 "cntlid": 61, 00:17:56.733 "qid": 0, 00:17:56.733 "state": "enabled", 00:17:56.733 "thread": "nvmf_tgt_poll_group_000", 00:17:56.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.733 "listen_address": { 00:17:56.733 "trtype": "TCP", 00:17:56.733 "adrfam": "IPv4", 00:17:56.733 "traddr": "10.0.0.2", 00:17:56.733 "trsvcid": "4420" 00:17:56.733 }, 00:17:56.733 "peer_address": { 00:17:56.733 "trtype": "TCP", 00:17:56.733 "adrfam": "IPv4", 00:17:56.733 "traddr": "10.0.0.1", 00:17:56.733 "trsvcid": "35018" 00:17:56.733 }, 00:17:56.733 "auth": { 00:17:56.733 "state": "completed", 00:17:56.733 "digest": "sha384", 00:17:56.733 "dhgroup": "ffdhe2048" 00:17:56.733 } 00:17:56.733 } 00:17:56.733 ]' 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.733 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.994 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:56.995 18:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:17:57.565 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.566 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.566 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.566 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.566 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.566 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.566 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:57.566 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.825 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.086 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.086 { 00:17:58.086 "cntlid": 63, 00:17:58.086 "qid": 0, 00:17:58.086 "state": "enabled", 00:17:58.086 "thread": "nvmf_tgt_poll_group_000", 00:17:58.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.086 "listen_address": { 00:17:58.086 "trtype": "TCP", 00:17:58.086 "adrfam": "IPv4", 00:17:58.086 "traddr": "10.0.0.2", 00:17:58.086 "trsvcid": "4420" 00:17:58.086 }, 00:17:58.086 "peer_address": { 00:17:58.086 "trtype": "TCP", 00:17:58.086 "adrfam": "IPv4", 00:17:58.086 "traddr": "10.0.0.1", 00:17:58.086 "trsvcid": "35054" 00:17:58.086 }, 00:17:58.086 "auth": { 00:17:58.086 "state": "completed", 00:17:58.086 "digest": "sha384", 00:17:58.086 "dhgroup": "ffdhe2048" 00:17:58.086 } 00:17:58.086 } 00:17:58.086 ]' 00:17:58.086 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.346 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.346 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.346 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.346 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.346 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.346 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.346 18:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.607 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:58.607 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:59.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:59.179 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.439 18:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.439 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.439 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.439 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.439 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.700 
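After each attach, the script verifies the authenticated qpair before tearing it down (target/auth.sh@73 through @78 in the entries below). A sketch of those checks, with the jq filters copied from the trace; the expected values shown are for the ffdhe3072/key0 round in progress here:

    RPC=scripts/rpc.py; HOSTSOCK=/var/tmp/host.sock
    $RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    echo "$qpairs" | jq -r '.[0].auth.digest'    # expect sha384
    echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect ffdhe3072
    echo "$qpairs" | jq -r '.[0].auth.state'     # expect completed
    $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

The "state": "completed" field in the qpair JSON is the assertion the test cares about: the DH-HMAC-CHAP negotiation finished for that qpair.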
00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.700 { 00:17:59.700 "cntlid": 65, 00:17:59.700 "qid": 0, 00:17:59.700 "state": "enabled", 00:17:59.700 "thread": "nvmf_tgt_poll_group_000", 00:17:59.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.700 "listen_address": { 00:17:59.700 "trtype": "TCP", 00:17:59.700 "adrfam": "IPv4", 00:17:59.700 "traddr": "10.0.0.2", 00:17:59.700 "trsvcid": "4420" 00:17:59.700 }, 00:17:59.700 "peer_address": { 00:17:59.700 "trtype": "TCP", 00:17:59.700 "adrfam": "IPv4", 00:17:59.700 "traddr": "10.0.0.1", 00:17:59.700 "trsvcid": "49600" 00:17:59.700 }, 00:17:59.700 "auth": { 00:17:59.700 "state": "completed", 00:17:59.700 "digest": "sha384", 00:17:59.700 "dhgroup": "ffdhe3072" 00:17:59.700 } 00:17:59.700 } 00:17:59.700 ]' 00:17:59.700 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.960 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.960 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.960 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.960 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.960 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.960 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.960 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.221 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:00.221 18:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:00.796 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.796 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.796 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.796 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.796 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.796 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.796 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:00.796 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.056 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.317 00:18:01.317 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.317 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.317 18:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.317 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.317 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.317 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.317 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.317 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.317 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.317 { 00:18:01.317 "cntlid": 67, 00:18:01.317 "qid": 0, 00:18:01.317 "state": "enabled", 00:18:01.317 "thread": "nvmf_tgt_poll_group_000", 00:18:01.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.317 "listen_address": { 00:18:01.317 "trtype": "TCP", 00:18:01.317 "adrfam": "IPv4", 00:18:01.317 "traddr": "10.0.0.2", 00:18:01.317 "trsvcid": "4420" 00:18:01.317 }, 00:18:01.317 "peer_address": { 00:18:01.317 "trtype": "TCP", 00:18:01.317 "adrfam": "IPv4", 00:18:01.317 "traddr": "10.0.0.1", 00:18:01.317 "trsvcid": "49636" 00:18:01.317 }, 00:18:01.317 "auth": { 00:18:01.317 "state": "completed", 00:18:01.317 "digest": "sha384", 00:18:01.317 "dhgroup": "ffdhe3072" 00:18:01.317 } 00:18:01.317 } 00:18:01.317 ]' 00:18:01.317 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.578 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.578 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.578 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.578 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.578 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.578 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.578 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.838 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret 
DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:01.838 18:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:02.410 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.410 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.410 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.410 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.410 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.410 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.410 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:02.410 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.671 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.932 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.932 { 00:18:02.932 "cntlid": 69, 00:18:02.932 "qid": 0, 00:18:02.932 "state": "enabled", 00:18:02.932 "thread": "nvmf_tgt_poll_group_000", 00:18:02.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.932 "listen_address": { 00:18:02.932 "trtype": "TCP", 00:18:02.932 "adrfam": "IPv4", 00:18:02.932 "traddr": "10.0.0.2", 00:18:02.932 "trsvcid": "4420" 00:18:02.932 }, 00:18:02.932 "peer_address": { 00:18:02.932 "trtype": "TCP", 00:18:02.932 "adrfam": "IPv4", 00:18:02.932 "traddr": "10.0.0.1", 00:18:02.932 "trsvcid": "49650" 00:18:02.932 }, 00:18:02.932 "auth": { 00:18:02.932 "state": "completed", 00:18:02.932 "digest": "sha384", 00:18:02.932 "dhgroup": "ffdhe3072" 00:18:02.932 } 00:18:02.932 } 00:18:02.932 ]' 00:18:02.932 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.193 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.193 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.193 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:03.193 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.193 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.193 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.193 18:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:03.454 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:03.454 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:04.024 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.024 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.024 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.024 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.024 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.024 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.024 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.024 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.284 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
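
For readers tracing the xtrace output, each connect_authenticate iteration above boils down to three RPCs: pin the host initiator to one digest/dhgroup pair, authorize the host NQN on the subsystem with a DH-HMAC-CHAP key, and attach a controller (the attach only succeeds if authentication completes). A minimal sketch of the iteration in progress here (sha384 / ffdhe3072 / key3); the rpc.py path, socket, NQNs, and flags are taken from this run, while the target-side default RPC socket and the earlier registration of the key3 keyring are assumptions not shown in this excerpt:

#!/usr/bin/env bash
# Sketch of one connect_authenticate pass (digest=sha384, dhgroup=ffdhe3072, keyid=3).
# Assumes the target answers on rpc.py's default socket and that the DH-HMAC-CHAP
# key "key3" was registered earlier in the test run (outside this excerpt).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# 1) Pin the host-side initiator to a single digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# 2) Authorize the host on the subsystem, binding it to the key (this run passes
#    no controller key for key id 3, hence no --dhchap-ctrlr-key here).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# 3) Attach a controller; an authentication failure would fail the attach.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key3
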
00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.285 18:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.545 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.545 { 00:18:04.545 "cntlid": 71, 00:18:04.545 "qid": 0, 00:18:04.545 "state": "enabled", 00:18:04.545 "thread": "nvmf_tgt_poll_group_000", 00:18:04.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.545 "listen_address": { 00:18:04.545 "trtype": "TCP", 00:18:04.545 "adrfam": "IPv4", 00:18:04.545 "traddr": "10.0.0.2", 00:18:04.545 "trsvcid": "4420" 00:18:04.545 }, 00:18:04.545 "peer_address": { 00:18:04.545 "trtype": "TCP", 00:18:04.545 "adrfam": "IPv4", 00:18:04.545 "traddr": "10.0.0.1", 00:18:04.545 "trsvcid": "49672" 00:18:04.545 }, 00:18:04.545 "auth": { 00:18:04.545 "state": "completed", 00:18:04.545 "digest": "sha384", 00:18:04.545 "dhgroup": "ffdhe3072" 00:18:04.545 } 00:18:04.545 } 00:18:04.545 ]' 00:18:04.545 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.806 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.806 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.806 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.806 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.806 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.806 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.806 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.067 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:05.068 18:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.641 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
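
The qpair JSON dumps above come from the verification half of each iteration. A sketch of those checks for the ffdhe4096/key0 pass that follows, with the jq filters and expected values copied from this log (same rpc.py path and NQN as in the sketch above; the log issues the target-side query through its rpc_cmd wrapper, so the default-socket call here is an assumption):

# Confirm the attach really authenticated: the controller must exist on the
# host side, and the target-side qpair must report the negotiated parameters.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The interleaved kernel-initiator checks follow the same pattern with nvme-cli: nvme connect -t tcp -a 10.0.0.2 -n <subnqn> -q <hostnqn> --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:..., then nvme disconnect -n <subnqn>, with the secrets passed inline rather than as keyring names.
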
00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.903 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.164 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.164 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.164 { 00:18:06.164 "cntlid": 73, 00:18:06.164 "qid": 0, 00:18:06.164 "state": "enabled", 00:18:06.164 "thread": "nvmf_tgt_poll_group_000", 00:18:06.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.164 "listen_address": { 00:18:06.164 "trtype": "TCP", 00:18:06.164 "adrfam": "IPv4", 00:18:06.164 "traddr": "10.0.0.2", 00:18:06.164 "trsvcid": "4420" 00:18:06.164 }, 00:18:06.164 "peer_address": { 00:18:06.164 "trtype": "TCP", 00:18:06.164 "adrfam": "IPv4", 00:18:06.164 "traddr": "10.0.0.1", 00:18:06.164 "trsvcid": "49708" 00:18:06.164 }, 00:18:06.164 "auth": { 00:18:06.164 "state": "completed", 00:18:06.164 "digest": "sha384", 00:18:06.164 "dhgroup": "ffdhe4096" 00:18:06.164 } 00:18:06.164 } 00:18:06.164 ]' 00:18:06.426 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.426 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.426 18:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.426 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.426 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.426 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.426 
18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.426 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.686 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:06.686 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:07.256 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.256 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.256 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.256 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.256 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.256 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.256 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:07.256 18:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.517 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.777 00:18:07.777 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.777 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.777 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.777 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.777 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.777 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.777 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.037 { 00:18:08.037 "cntlid": 75, 00:18:08.037 "qid": 0, 00:18:08.037 "state": "enabled", 00:18:08.037 "thread": "nvmf_tgt_poll_group_000", 00:18:08.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.037 "listen_address": { 00:18:08.037 "trtype": "TCP", 00:18:08.037 "adrfam": "IPv4", 00:18:08.037 "traddr": "10.0.0.2", 00:18:08.037 "trsvcid": "4420" 00:18:08.037 }, 00:18:08.037 "peer_address": { 00:18:08.037 "trtype": "TCP", 00:18:08.037 "adrfam": "IPv4", 00:18:08.037 "traddr": "10.0.0.1", 00:18:08.037 "trsvcid": "49736" 00:18:08.037 }, 00:18:08.037 "auth": { 00:18:08.037 "state": "completed", 00:18:08.037 "digest": "sha384", 00:18:08.037 "dhgroup": "ffdhe4096" 00:18:08.037 } 00:18:08.037 } 00:18:08.037 ]' 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.037 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.325 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:08.325 18:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:08.977 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.977 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.978 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.239 00:18:09.239 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.239 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.239 18:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.498 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.498 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.499 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.499 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.499 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.499 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.499 { 00:18:09.499 "cntlid": 77, 00:18:09.499 "qid": 0, 00:18:09.499 "state": "enabled", 00:18:09.499 "thread": "nvmf_tgt_poll_group_000", 00:18:09.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.499 "listen_address": { 00:18:09.499 "trtype": "TCP", 00:18:09.499 "adrfam": "IPv4", 00:18:09.499 "traddr": "10.0.0.2", 00:18:09.499 "trsvcid": "4420" 00:18:09.499 }, 00:18:09.499 "peer_address": { 00:18:09.499 "trtype": "TCP", 00:18:09.499 "adrfam": "IPv4", 00:18:09.499 "traddr": "10.0.0.1", 00:18:09.499 "trsvcid": "33358" 00:18:09.499 }, 00:18:09.499 "auth": { 00:18:09.499 "state": "completed", 00:18:09.499 "digest": "sha384", 00:18:09.499 "dhgroup": "ffdhe4096" 00:18:09.499 } 00:18:09.499 } 00:18:09.499 ]' 00:18:09.499 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.499 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.499 18:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.499 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.499 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.759 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.759 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.759 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.759 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:09.759 18:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.697 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.956 00:18:10.956 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.956 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.956 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.215 { 00:18:11.215 "cntlid": 79, 00:18:11.215 "qid": 0, 00:18:11.215 "state": "enabled", 00:18:11.215 "thread": "nvmf_tgt_poll_group_000", 00:18:11.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.215 "listen_address": { 00:18:11.215 "trtype": "TCP", 00:18:11.215 "adrfam": "IPv4", 00:18:11.215 "traddr": "10.0.0.2", 00:18:11.215 "trsvcid": "4420" 00:18:11.215 }, 00:18:11.215 "peer_address": { 00:18:11.215 "trtype": "TCP", 00:18:11.215 "adrfam": "IPv4", 00:18:11.215 "traddr": "10.0.0.1", 00:18:11.215 "trsvcid": "33396" 00:18:11.215 }, 00:18:11.215 "auth": { 00:18:11.215 "state": "completed", 00:18:11.215 "digest": "sha384", 00:18:11.215 "dhgroup": "ffdhe4096" 00:18:11.215 } 00:18:11.215 } 00:18:11.215 ]' 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.215 18:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.215 18:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.474 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:11.474 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:12.042 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.302 18:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.302 18:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.562 00:18:12.562 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.562 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.562 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.823 { 00:18:12.823 "cntlid": 81, 00:18:12.823 "qid": 0, 00:18:12.823 "state": "enabled", 00:18:12.823 "thread": "nvmf_tgt_poll_group_000", 00:18:12.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.823 "listen_address": { 00:18:12.823 "trtype": "TCP", 00:18:12.823 "adrfam": "IPv4", 00:18:12.823 "traddr": "10.0.0.2", 00:18:12.823 "trsvcid": "4420" 00:18:12.823 }, 00:18:12.823 "peer_address": { 00:18:12.823 "trtype": "TCP", 00:18:12.823 "adrfam": "IPv4", 00:18:12.823 "traddr": "10.0.0.1", 00:18:12.823 "trsvcid": "33434" 00:18:12.823 }, 00:18:12.823 "auth": { 00:18:12.823 "state": "completed", 00:18:12.823 "digest": 
"sha384", 00:18:12.823 "dhgroup": "ffdhe6144" 00:18:12.823 } 00:18:12.823 } 00:18:12.823 ]' 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.823 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.824 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.824 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.085 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:13.085 18:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:13.657 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.919 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.205 00:18:14.206 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.206 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.206 18:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.468 { 00:18:14.468 "cntlid": 83, 00:18:14.468 "qid": 0, 00:18:14.468 "state": "enabled", 00:18:14.468 "thread": "nvmf_tgt_poll_group_000", 00:18:14.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.468 "listen_address": { 00:18:14.468 "trtype": "TCP", 00:18:14.468 "adrfam": "IPv4", 00:18:14.468 "traddr": "10.0.0.2", 00:18:14.468 
"trsvcid": "4420" 00:18:14.468 }, 00:18:14.468 "peer_address": { 00:18:14.468 "trtype": "TCP", 00:18:14.468 "adrfam": "IPv4", 00:18:14.468 "traddr": "10.0.0.1", 00:18:14.468 "trsvcid": "33458" 00:18:14.468 }, 00:18:14.468 "auth": { 00:18:14.468 "state": "completed", 00:18:14.468 "digest": "sha384", 00:18:14.468 "dhgroup": "ffdhe6144" 00:18:14.468 } 00:18:14.468 } 00:18:14.468 ]' 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.468 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.729 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.729 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.729 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.729 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.729 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.729 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:14.729 18:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.672 
18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.672 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.932 00:18:15.932 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.932 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.932 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.194 { 00:18:16.194 "cntlid": 85, 00:18:16.194 "qid": 0, 00:18:16.194 "state": "enabled", 00:18:16.194 "thread": "nvmf_tgt_poll_group_000", 00:18:16.194 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.194 "listen_address": { 00:18:16.194 "trtype": "TCP", 00:18:16.194 "adrfam": "IPv4", 00:18:16.194 "traddr": "10.0.0.2", 00:18:16.194 "trsvcid": "4420" 00:18:16.194 }, 00:18:16.194 "peer_address": { 00:18:16.194 "trtype": "TCP", 00:18:16.194 "adrfam": "IPv4", 00:18:16.194 "traddr": "10.0.0.1", 00:18:16.194 "trsvcid": "33488" 00:18:16.194 }, 00:18:16.194 "auth": { 00:18:16.194 "state": "completed", 00:18:16.194 "digest": "sha384", 00:18:16.194 "dhgroup": "ffdhe6144" 00:18:16.194 } 00:18:16.194 } 00:18:16.194 ]' 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.194 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.454 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:16.454 18:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.454 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.454 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.454 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.455 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:16.455 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:17.397 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.397 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.397 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.397 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.397 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.397 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.397 18:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.397 18:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.397 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.658 00:18:17.658 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.658 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.658 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.921 { 00:18:17.921 "cntlid": 87, 
00:18:17.921 "qid": 0, 00:18:17.921 "state": "enabled", 00:18:17.921 "thread": "nvmf_tgt_poll_group_000", 00:18:17.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.921 "listen_address": { 00:18:17.921 "trtype": "TCP", 00:18:17.921 "adrfam": "IPv4", 00:18:17.921 "traddr": "10.0.0.2", 00:18:17.921 "trsvcid": "4420" 00:18:17.921 }, 00:18:17.921 "peer_address": { 00:18:17.921 "trtype": "TCP", 00:18:17.921 "adrfam": "IPv4", 00:18:17.921 "traddr": "10.0.0.1", 00:18:17.921 "trsvcid": "33516" 00:18:17.921 }, 00:18:17.921 "auth": { 00:18:17.921 "state": "completed", 00:18:17.921 "digest": "sha384", 00:18:17.921 "dhgroup": "ffdhe6144" 00:18:17.921 } 00:18:17.921 } 00:18:17.921 ]' 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.921 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.184 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.184 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.184 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.184 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:18.184 18:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:18.756 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.017 18:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.588 00:18:19.588 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.588 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.588 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.848 { 00:18:19.848 "cntlid": 89, 00:18:19.848 "qid": 0, 00:18:19.848 "state": "enabled", 00:18:19.848 "thread": "nvmf_tgt_poll_group_000", 00:18:19.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.848 "listen_address": { 00:18:19.848 "trtype": "TCP", 00:18:19.848 "adrfam": "IPv4", 00:18:19.848 "traddr": "10.0.0.2", 00:18:19.848 "trsvcid": "4420" 00:18:19.848 }, 00:18:19.848 "peer_address": { 00:18:19.848 "trtype": "TCP", 00:18:19.848 "adrfam": "IPv4", 00:18:19.848 "traddr": "10.0.0.1", 00:18:19.848 "trsvcid": "38684" 00:18:19.848 }, 00:18:19.848 "auth": { 00:18:19.848 "state": "completed", 00:18:19.848 "digest": "sha384", 00:18:19.848 "dhgroup": "ffdhe8192" 00:18:19.848 } 00:18:19.848 } 00:18:19.848 ]' 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.848 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.107 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:20.107 18:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:20.677 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.677 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.677 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.677 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.677 18:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.677 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.677 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.677 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.936 18:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.504 00:18:21.504 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.504 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.504 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.763 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.763 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:21.763 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.763 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.763 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.763 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.763 { 00:18:21.763 "cntlid": 91, 00:18:21.764 "qid": 0, 00:18:21.764 "state": "enabled", 00:18:21.764 "thread": "nvmf_tgt_poll_group_000", 00:18:21.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.764 "listen_address": { 00:18:21.764 "trtype": "TCP", 00:18:21.764 "adrfam": "IPv4", 00:18:21.764 "traddr": "10.0.0.2", 00:18:21.764 "trsvcid": "4420" 00:18:21.764 }, 00:18:21.764 "peer_address": { 00:18:21.764 "trtype": "TCP", 00:18:21.764 "adrfam": "IPv4", 00:18:21.764 "traddr": "10.0.0.1", 00:18:21.764 "trsvcid": "38704" 00:18:21.764 }, 00:18:21.764 "auth": { 00:18:21.764 "state": "completed", 00:18:21.764 "digest": "sha384", 00:18:21.764 "dhgroup": "ffdhe8192" 00:18:21.764 } 00:18:21.764 } 00:18:21.764 ]' 00:18:21.764 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.764 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.764 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.764 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.764 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.764 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.764 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.764 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.023 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:22.023 18:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:22.591 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.591 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.591 18:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.591 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.591 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.591 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.591 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.591 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.850 18:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.420 00:18:23.420 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.420 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.420 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.420 18:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.420 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.420 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.420 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.681 { 00:18:23.681 "cntlid": 93, 00:18:23.681 "qid": 0, 00:18:23.681 "state": "enabled", 00:18:23.681 "thread": "nvmf_tgt_poll_group_000", 00:18:23.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.681 "listen_address": { 00:18:23.681 "trtype": "TCP", 00:18:23.681 "adrfam": "IPv4", 00:18:23.681 "traddr": "10.0.0.2", 00:18:23.681 "trsvcid": "4420" 00:18:23.681 }, 00:18:23.681 "peer_address": { 00:18:23.681 "trtype": "TCP", 00:18:23.681 "adrfam": "IPv4", 00:18:23.681 "traddr": "10.0.0.1", 00:18:23.681 "trsvcid": "38724" 00:18:23.681 }, 00:18:23.681 "auth": { 00:18:23.681 "state": "completed", 00:18:23.681 "digest": "sha384", 00:18:23.681 "dhgroup": "ffdhe8192" 00:18:23.681 } 00:18:23.681 } 00:18:23.681 ]' 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.681 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.943 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:23.943 18:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:24.514 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.514 18:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.515 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.515 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.515 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.515 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.515 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.515 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.784 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.354 00:18:25.354 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.354 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.354 18:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.354 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.354 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.354 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.354 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.354 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.354 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.354 { 00:18:25.354 "cntlid": 95, 00:18:25.354 "qid": 0, 00:18:25.354 "state": "enabled", 00:18:25.354 "thread": "nvmf_tgt_poll_group_000", 00:18:25.354 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.354 "listen_address": { 00:18:25.354 "trtype": "TCP", 00:18:25.354 "adrfam": "IPv4", 00:18:25.354 "traddr": "10.0.0.2", 00:18:25.354 "trsvcid": "4420" 00:18:25.354 }, 00:18:25.354 "peer_address": { 00:18:25.354 "trtype": "TCP", 00:18:25.354 "adrfam": "IPv4", 00:18:25.354 "traddr": "10.0.0.1", 00:18:25.354 "trsvcid": "38744" 00:18:25.354 }, 00:18:25.354 "auth": { 00:18:25.354 "state": "completed", 00:18:25.354 "digest": "sha384", 00:18:25.354 "dhgroup": "ffdhe8192" 00:18:25.354 } 00:18:25.354 } 00:18:25.354 ]' 00:18:25.354 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.354 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.614 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.614 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.614 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.614 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.614 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.615 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.874 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:25.874 18:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.444 18:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.444 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.704 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.964 00:18:26.964 
18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.964 { 00:18:26.964 "cntlid": 97, 00:18:26.964 "qid": 0, 00:18:26.964 "state": "enabled", 00:18:26.964 "thread": "nvmf_tgt_poll_group_000", 00:18:26.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.964 "listen_address": { 00:18:26.964 "trtype": "TCP", 00:18:26.964 "adrfam": "IPv4", 00:18:26.964 "traddr": "10.0.0.2", 00:18:26.964 "trsvcid": "4420" 00:18:26.964 }, 00:18:26.964 "peer_address": { 00:18:26.964 "trtype": "TCP", 00:18:26.964 "adrfam": "IPv4", 00:18:26.964 "traddr": "10.0.0.1", 00:18:26.964 "trsvcid": "38772" 00:18:26.964 }, 00:18:26.964 "auth": { 00:18:26.964 "state": "completed", 00:18:26.964 "digest": "sha512", 00:18:26.964 "dhgroup": "null" 00:18:26.964 } 00:18:26.964 } 00:18:26.964 ]' 00:18:26.964 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.223 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.223 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.223 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:27.223 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.223 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.223 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.224 18:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.484 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:27.484 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:28.054 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.054 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.054 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.054 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.054 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.054 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.054 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:28.054 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:28.314 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:28.314 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.315 18:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.575 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.575 { 00:18:28.575 "cntlid": 99, 00:18:28.575 "qid": 0, 00:18:28.575 "state": "enabled", 00:18:28.575 "thread": "nvmf_tgt_poll_group_000", 00:18:28.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.575 "listen_address": { 00:18:28.575 "trtype": "TCP", 00:18:28.575 "adrfam": "IPv4", 00:18:28.575 "traddr": "10.0.0.2", 00:18:28.575 "trsvcid": "4420" 00:18:28.575 }, 00:18:28.575 "peer_address": { 00:18:28.575 "trtype": "TCP", 00:18:28.575 "adrfam": "IPv4", 00:18:28.575 "traddr": "10.0.0.1", 00:18:28.575 "trsvcid": "38792" 00:18:28.575 }, 00:18:28.575 "auth": { 00:18:28.575 "state": "completed", 00:18:28.575 "digest": "sha512", 00:18:28.575 "dhgroup": "null" 00:18:28.575 } 00:18:28.575 } 00:18:28.575 ]' 00:18:28.575 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.835 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.835 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.835 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:28.835 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.835 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.835 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.835 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.094 18:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:29.094 18:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:29.666 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.666 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.666 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.666 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.666 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.666 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.666 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.666 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
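The iterations above all follow one pattern per digest/dhgroup/key combination. A minimal sketch of that loop body in shell, assuming the same RPC sockets, addresses, and NQNs used throughout this run; "$hostnqn" stands in for the uuid-based host NQN and key2/ckey2 for whichever key pair is under test:

# host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup pair
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
# target side (the trace's rpc_cmd, default socket): allow the host NQN with the keys under test
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# attach a controller through the SPDK host, then check that the qpair finished authenticating
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0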
00:18:29.926 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.186 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.187 { 00:18:30.187 "cntlid": 101, 00:18:30.187 "qid": 0, 00:18:30.187 "state": "enabled", 00:18:30.187 "thread": "nvmf_tgt_poll_group_000", 00:18:30.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:30.187 "listen_address": { 00:18:30.187 "trtype": "TCP", 00:18:30.187 "adrfam": "IPv4", 00:18:30.187 "traddr": "10.0.0.2", 00:18:30.187 "trsvcid": "4420" 00:18:30.187 }, 00:18:30.187 "peer_address": { 00:18:30.187 "trtype": "TCP", 00:18:30.187 "adrfam": "IPv4", 00:18:30.187 "traddr": "10.0.0.1", 00:18:30.187 "trsvcid": "48376" 00:18:30.187 }, 00:18:30.187 "auth": { 00:18:30.187 "state": "completed", 00:18:30.187 "digest": "sha512", 00:18:30.187 "dhgroup": "null" 00:18:30.187 } 00:18:30.187 } 00:18:30.187 ]' 00:18:30.187 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.447 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.447 18:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.447 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.447 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.447 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.447 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.447 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.706 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:30.706 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:31.275 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.275 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.275 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.275 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.275 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.275 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.275 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.275 18:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.534 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:31.534 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.534 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.534 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.534 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.535 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.535 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:31.535 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.535 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.535 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.535 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.535 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.535 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.794 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.794 { 00:18:31.794 "cntlid": 103, 00:18:31.794 "qid": 0, 00:18:31.794 "state": "enabled", 00:18:31.794 "thread": "nvmf_tgt_poll_group_000", 00:18:31.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.794 "listen_address": { 00:18:31.794 "trtype": "TCP", 00:18:31.794 "adrfam": "IPv4", 00:18:31.794 "traddr": "10.0.0.2", 00:18:31.794 "trsvcid": "4420" 00:18:31.794 }, 00:18:31.794 "peer_address": { 00:18:31.794 "trtype": "TCP", 00:18:31.794 "adrfam": "IPv4", 00:18:31.794 "traddr": "10.0.0.1", 00:18:31.794 "trsvcid": "48410" 00:18:31.794 }, 00:18:31.794 "auth": { 00:18:31.794 "state": "completed", 00:18:31.794 "digest": "sha512", 00:18:31.794 "dhgroup": "null" 00:18:31.794 } 00:18:31.794 } 00:18:31.794 ]' 00:18:31.794 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.053 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.054 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.054 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:32.054 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.054 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.054 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.054 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.054 18:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:32.054 18:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
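[Annotation] Every pass in this run follows the same connect_authenticate cycle: pin the host to a single digest/dhgroup pair with bdev_nvme_set_options, authorize the host NQN on the subsystem with nvmf_subsystem_add_host, attach a controller (the DH-HMAC-CHAP exchange runs during this connect), check the resulting qpair, then tear everything down before the next combination. Below is a minimal standalone sketch of one pass using only RPCs that appear in this log; the relative rpc.py path is illustrative, and key0/ckey0 are keyring names assumed to be registered earlier in the suite (not shown here).

HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"   # initiator-side SPDK app (path illustrative)
TGT_RPC="scripts/rpc.py"                          # target-side SPDK app, default socket (assumption)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Restrict the host to one digest/dhgroup combination for this pass.
$HOST_RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# Authorize the host, binding key0 (ckey0 adds bidirectional authentication).
$TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Attach a controller; authentication runs as part of the connect.
$HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Confirm the controller came up, then tear down before the next combination.
$HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name'
$HOST_RPC bdev_nvme_detach_controller nvme0
$TGT_RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"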
00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.001 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.264 00:18:33.264 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.264 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.264 18:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.524 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.524 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.524 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.524 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.524 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.524 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.524 { 00:18:33.524 "cntlid": 105, 00:18:33.524 "qid": 0, 00:18:33.524 "state": "enabled", 00:18:33.524 "thread": "nvmf_tgt_poll_group_000", 00:18:33.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.524 "listen_address": { 00:18:33.524 "trtype": "TCP", 00:18:33.525 "adrfam": "IPv4", 00:18:33.525 "traddr": "10.0.0.2", 00:18:33.525 "trsvcid": "4420" 00:18:33.525 }, 00:18:33.525 "peer_address": { 00:18:33.525 "trtype": "TCP", 00:18:33.525 "adrfam": "IPv4", 00:18:33.525 "traddr": "10.0.0.1", 00:18:33.525 "trsvcid": "48422" 00:18:33.525 }, 00:18:33.525 "auth": { 00:18:33.525 "state": "completed", 00:18:33.525 "digest": "sha512", 00:18:33.525 "dhgroup": "ffdhe2048" 00:18:33.525 } 00:18:33.525 } 00:18:33.525 ]' 00:18:33.525 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.525 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.525 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.525 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.525 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.525 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.525 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.525 18:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.785 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:33.785 18:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:34.357 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.357 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.357 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.357 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.357 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.357 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.357 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.357 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.619 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.881 00:18:34.881 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.881 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.881 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.143 { 00:18:35.143 "cntlid": 107, 00:18:35.143 "qid": 0, 00:18:35.143 "state": "enabled", 00:18:35.143 "thread": "nvmf_tgt_poll_group_000", 00:18:35.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.143 "listen_address": { 00:18:35.143 "trtype": "TCP", 00:18:35.143 "adrfam": "IPv4", 00:18:35.143 "traddr": "10.0.0.2", 00:18:35.143 "trsvcid": "4420" 00:18:35.143 }, 00:18:35.143 "peer_address": { 00:18:35.143 "trtype": "TCP", 00:18:35.143 "adrfam": "IPv4", 00:18:35.143 "traddr": "10.0.0.1", 00:18:35.143 "trsvcid": "48456" 00:18:35.143 }, 00:18:35.143 "auth": { 00:18:35.143 "state": "completed", 00:18:35.143 "digest": "sha512", 00:18:35.143 "dhgroup": "ffdhe2048" 00:18:35.143 } 00:18:35.143 } 00:18:35.143 ]' 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.143 18:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.404 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:35.404 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:35.974 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.974 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.974 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.975 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.975 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.975 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.975 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.975 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
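[Annotation] The qpair dumps above are verified with three jq filters, repeated for every pass: .[0].auth.digest, .[0].auth.dhgroup, and .[0].auth.state. Condensed into one sketch, with the subsystem NQN and expected values taken from the pass above and an illustrative rpc.py path:

# Pull the negotiated auth parameters of the first qpair and assert them.
auth=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -c '.[0].auth')
[[ $(jq -r .digest  <<<"$auth") == sha512    ]] || exit 1   # hash chosen for DH-HMAC-CHAP
[[ $(jq -r .dhgroup <<<"$auth") == ffdhe2048 ]] || exit 1   # "null" when no DH step is used
[[ $(jq -r .state   <<<"$auth") == completed ]] || exit 1   # handshake finished successfully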
00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.235 18:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.495 00:18:36.495 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.495 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.495 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.756 { 00:18:36.756 "cntlid": 109, 00:18:36.756 "qid": 0, 00:18:36.756 "state": "enabled", 00:18:36.756 "thread": "nvmf_tgt_poll_group_000", 00:18:36.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.756 "listen_address": { 00:18:36.756 "trtype": "TCP", 00:18:36.756 "adrfam": "IPv4", 00:18:36.756 "traddr": "10.0.0.2", 00:18:36.756 "trsvcid": "4420" 00:18:36.756 }, 00:18:36.756 "peer_address": { 00:18:36.756 "trtype": "TCP", 00:18:36.756 "adrfam": "IPv4", 00:18:36.756 "traddr": "10.0.0.1", 00:18:36.756 "trsvcid": "48468" 00:18:36.756 }, 00:18:36.756 "auth": { 00:18:36.756 "state": "completed", 00:18:36.756 "digest": "sha512", 00:18:36.756 "dhgroup": "ffdhe2048" 00:18:36.756 } 00:18:36.756 } 00:18:36.756 ]' 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.756 18:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.756 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.017 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:37.017 18:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:37.588 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.588 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.588 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.588 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.588 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.588 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.588 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:37.588 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.849 18:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.849 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.109 00:18:38.109 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.109 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.109 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.371 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.371 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.371 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.371 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.371 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.371 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.371 { 00:18:38.371 "cntlid": 111, 00:18:38.371 "qid": 0, 00:18:38.371 "state": "enabled", 00:18:38.371 "thread": "nvmf_tgt_poll_group_000", 00:18:38.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:38.371 "listen_address": { 00:18:38.371 "trtype": "TCP", 00:18:38.371 "adrfam": "IPv4", 00:18:38.371 "traddr": "10.0.0.2", 00:18:38.371 "trsvcid": "4420" 00:18:38.371 }, 00:18:38.371 "peer_address": { 00:18:38.371 "trtype": "TCP", 00:18:38.371 "adrfam": "IPv4", 00:18:38.371 "traddr": "10.0.0.1", 00:18:38.371 "trsvcid": "48484" 00:18:38.371 }, 00:18:38.371 "auth": { 00:18:38.371 "state": "completed", 00:18:38.371 "digest": "sha512", 00:18:38.371 "dhgroup": "ffdhe2048" 00:18:38.371 } 00:18:38.371 } 00:18:38.371 ]' 00:18:38.371 18:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.371 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.371 
18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.371 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.371 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.371 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.371 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.371 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.633 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:38.633 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.203 18:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.464 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.724 00:18:39.724 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.724 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.724 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.985 { 00:18:39.985 "cntlid": 113, 00:18:39.985 "qid": 0, 00:18:39.985 "state": "enabled", 00:18:39.985 "thread": "nvmf_tgt_poll_group_000", 00:18:39.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.985 "listen_address": { 00:18:39.985 "trtype": "TCP", 00:18:39.985 "adrfam": "IPv4", 00:18:39.985 "traddr": "10.0.0.2", 00:18:39.985 "trsvcid": "4420" 00:18:39.985 }, 00:18:39.985 "peer_address": { 00:18:39.985 "trtype": "TCP", 00:18:39.985 "adrfam": "IPv4", 00:18:39.985 "traddr": "10.0.0.1", 00:18:39.985 "trsvcid": "38742" 00:18:39.985 }, 00:18:39.985 "auth": { 00:18:39.985 "state": "completed", 00:18:39.985 "digest": "sha512", 00:18:39.985 "dhgroup": "ffdhe3072" 00:18:39.985 } 00:18:39.985 } 00:18:39.985 ]' 00:18:39.985 18:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.985 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.246 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:40.246 18:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:40.815 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.074 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.075 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.075 18:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.334 00:18:41.334 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.334 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.334 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.593 { 00:18:41.593 "cntlid": 115, 00:18:41.593 "qid": 0, 00:18:41.593 "state": "enabled", 00:18:41.593 "thread": "nvmf_tgt_poll_group_000", 00:18:41.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.593 "listen_address": { 00:18:41.593 "trtype": "TCP", 00:18:41.593 "adrfam": "IPv4", 00:18:41.593 "traddr": "10.0.0.2", 00:18:41.593 "trsvcid": "4420" 00:18:41.593 }, 00:18:41.593 "peer_address": { 00:18:41.593 "trtype": "TCP", 00:18:41.593 "adrfam": "IPv4", 
00:18:41.593 "traddr": "10.0.0.1", 00:18:41.593 "trsvcid": "38770" 00:18:41.593 }, 00:18:41.593 "auth": { 00:18:41.593 "state": "completed", 00:18:41.593 "digest": "sha512", 00:18:41.593 "dhgroup": "ffdhe3072" 00:18:41.593 } 00:18:41.593 } 00:18:41.593 ]' 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.593 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.594 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.594 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.854 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.854 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.854 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.854 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:41.854 18:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:42.424 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.685 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.686 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.686 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.946 00:18:42.946 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.946 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.946 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.206 { 00:18:43.206 "cntlid": 117, 00:18:43.206 "qid": 0, 00:18:43.206 "state": "enabled", 00:18:43.206 "thread": "nvmf_tgt_poll_group_000", 00:18:43.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.206 "listen_address": { 00:18:43.206 "trtype": "TCP", 
00:18:43.206 "adrfam": "IPv4", 00:18:43.206 "traddr": "10.0.0.2", 00:18:43.206 "trsvcid": "4420" 00:18:43.206 }, 00:18:43.206 "peer_address": { 00:18:43.206 "trtype": "TCP", 00:18:43.206 "adrfam": "IPv4", 00:18:43.206 "traddr": "10.0.0.1", 00:18:43.206 "trsvcid": "38800" 00:18:43.206 }, 00:18:43.206 "auth": { 00:18:43.206 "state": "completed", 00:18:43.206 "digest": "sha512", 00:18:43.206 "dhgroup": "ffdhe3072" 00:18:43.206 } 00:18:43.206 } 00:18:43.206 ]' 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.206 18:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.466 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:43.466 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:44.037 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.037 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.037 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.037 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.037 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.037 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.037 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.037 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.297 18:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.297 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.297 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.297 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.557 00:18:44.557 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.557 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.557 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.818 { 00:18:44.818 "cntlid": 119, 00:18:44.818 "qid": 0, 00:18:44.818 "state": "enabled", 00:18:44.818 "thread": "nvmf_tgt_poll_group_000", 00:18:44.818 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.818 "listen_address": { 00:18:44.818 "trtype": "TCP", 00:18:44.818 "adrfam": "IPv4", 00:18:44.818 "traddr": "10.0.0.2", 00:18:44.818 "trsvcid": "4420" 00:18:44.818 }, 00:18:44.818 "peer_address": { 00:18:44.818 "trtype": "TCP", 00:18:44.818 "adrfam": "IPv4", 00:18:44.818 "traddr": "10.0.0.1", 00:18:44.818 "trsvcid": "38834" 00:18:44.818 }, 00:18:44.818 "auth": { 00:18:44.818 "state": "completed", 00:18:44.818 "digest": "sha512", 00:18:44.818 "dhgroup": "ffdhe3072" 00:18:44.818 } 00:18:44.818 } 00:18:44.818 ]' 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.818 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.078 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:45.079 18:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:45.649 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.649 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.649 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.649 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.649 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.649 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.649 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.649 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.649 18:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.910 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.173 00:18:46.173 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.173 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.173 18:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.450 18:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.450 { 00:18:46.450 "cntlid": 121, 00:18:46.450 "qid": 0, 00:18:46.450 "state": "enabled", 00:18:46.450 "thread": "nvmf_tgt_poll_group_000", 00:18:46.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.450 "listen_address": { 00:18:46.450 "trtype": "TCP", 00:18:46.450 "adrfam": "IPv4", 00:18:46.450 "traddr": "10.0.0.2", 00:18:46.450 "trsvcid": "4420" 00:18:46.450 }, 00:18:46.450 "peer_address": { 00:18:46.450 "trtype": "TCP", 00:18:46.450 "adrfam": "IPv4", 00:18:46.450 "traddr": "10.0.0.1", 00:18:46.450 "trsvcid": "38854" 00:18:46.450 }, 00:18:46.450 "auth": { 00:18:46.450 "state": "completed", 00:18:46.450 "digest": "sha512", 00:18:46.450 "dhgroup": "ffdhe4096" 00:18:46.450 } 00:18:46.450 } 00:18:46.450 ]' 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.450 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.748 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:46.748 18:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:47.332 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.332 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.332 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.332 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.332 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
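[Note on the trace: the log above and below repeats one fixed cycle per (digest, dhgroup, keyid) combination. The following is a minimal bash sketch of that cycle, reconstructed from the traced commands for readability only. The rpc.py path, host socket, addresses, NQNs, key names, and jq checks are copied from the trace; the digest/dhgroup/keyid assignments are stand-ins for the loop variables of target/auth.sh, and the commented nvme-cli lines elide the hostid and DHHC-1 secrets that appear verbatim in the trace.]

    #!/usr/bin/env bash
    # One authentication round as traced by target/auth.sh (connect_authenticate).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    digest=sha512 dhgroup=ffdhe4096 keyid=0   # stand-ins for the script's loop variables

    # 1. Restrict the host-side initiator to the digest/dhgroup under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Allow the host on the target, binding it to key$keyid; the controller
    #    key is optional (the trace omits it for key3, per the script's
    #    ckey=(${ckeys[$3]:+...}) expansion).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 3. Attach from the host side, which runs the DH-HMAC-CHAP handshake.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 4. Verify on the target that the qpair negotiated what was requested.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # 5. Tear down, repeat the handshake with the kernel initiator using the
    #    raw DHHC-1 secrets, then revoke the host again.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    # nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    #     --hostid ... -l 0 --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
    # nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

[End of note; the trace resumes below with the next qpair dump.]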
00:18:47.332 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.332 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.332 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.591 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.850 00:18:47.850 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.850 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.850 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.109 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.109 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.109 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.109 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.109 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.109 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.109 { 00:18:48.109 "cntlid": 123, 00:18:48.109 "qid": 0, 00:18:48.109 "state": "enabled", 00:18:48.109 "thread": "nvmf_tgt_poll_group_000", 00:18:48.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.109 "listen_address": { 00:18:48.109 "trtype": "TCP", 00:18:48.109 "adrfam": "IPv4", 00:18:48.109 "traddr": "10.0.0.2", 00:18:48.109 "trsvcid": "4420" 00:18:48.109 }, 00:18:48.109 "peer_address": { 00:18:48.109 "trtype": "TCP", 00:18:48.110 "adrfam": "IPv4", 00:18:48.110 "traddr": "10.0.0.1", 00:18:48.110 "trsvcid": "38882" 00:18:48.110 }, 00:18:48.110 "auth": { 00:18:48.110 "state": "completed", 00:18:48.110 "digest": "sha512", 00:18:48.110 "dhgroup": "ffdhe4096" 00:18:48.110 } 00:18:48.110 } 00:18:48.110 ]' 00:18:48.110 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.110 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.110 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.110 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.110 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.110 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.110 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.110 18:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.369 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:48.369 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.308 18:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.308 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.309 18:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.569 00:18:49.569 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.569 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.569 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.829 18:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.829 { 00:18:49.829 "cntlid": 125, 00:18:49.829 "qid": 0, 00:18:49.829 "state": "enabled", 00:18:49.829 "thread": "nvmf_tgt_poll_group_000", 00:18:49.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:49.829 "listen_address": { 00:18:49.829 "trtype": "TCP", 00:18:49.829 "adrfam": "IPv4", 00:18:49.829 "traddr": "10.0.0.2", 00:18:49.829 "trsvcid": "4420" 00:18:49.829 }, 00:18:49.829 "peer_address": { 00:18:49.829 "trtype": "TCP", 00:18:49.829 "adrfam": "IPv4", 00:18:49.829 "traddr": "10.0.0.1", 00:18:49.829 "trsvcid": "49882" 00:18:49.829 }, 00:18:49.829 "auth": { 00:18:49.829 "state": "completed", 00:18:49.829 "digest": "sha512", 00:18:49.829 "dhgroup": "ffdhe4096" 00:18:49.829 } 00:18:49.829 } 00:18:49.829 ]' 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.829 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.089 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:50.089 18:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:50.658 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.658 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.658 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.658 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.659 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.659 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.659 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.659 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.918 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.178 00:18:51.178 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.178 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.178 18:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.438 18:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.438 { 00:18:51.438 "cntlid": 127, 00:18:51.438 "qid": 0, 00:18:51.438 "state": "enabled", 00:18:51.438 "thread": "nvmf_tgt_poll_group_000", 00:18:51.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.438 "listen_address": { 00:18:51.438 "trtype": "TCP", 00:18:51.438 "adrfam": "IPv4", 00:18:51.438 "traddr": "10.0.0.2", 00:18:51.438 "trsvcid": "4420" 00:18:51.438 }, 00:18:51.438 "peer_address": { 00:18:51.438 "trtype": "TCP", 00:18:51.438 "adrfam": "IPv4", 00:18:51.438 "traddr": "10.0.0.1", 00:18:51.438 "trsvcid": "49916" 00:18:51.438 }, 00:18:51.438 "auth": { 00:18:51.438 "state": "completed", 00:18:51.438 "digest": "sha512", 00:18:51.438 "dhgroup": "ffdhe4096" 00:18:51.438 } 00:18:51.438 } 00:18:51.438 ]' 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.438 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.698 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:51.698 18:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:52.269 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.269 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.269 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.269 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.530 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.530 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.530 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.530 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.530 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.530 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:52.530 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.531 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.792 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.054 
18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.054 { 00:18:53.054 "cntlid": 129, 00:18:53.054 "qid": 0, 00:18:53.054 "state": "enabled", 00:18:53.054 "thread": "nvmf_tgt_poll_group_000", 00:18:53.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.054 "listen_address": { 00:18:53.054 "trtype": "TCP", 00:18:53.054 "adrfam": "IPv4", 00:18:53.054 "traddr": "10.0.0.2", 00:18:53.054 "trsvcid": "4420" 00:18:53.054 }, 00:18:53.054 "peer_address": { 00:18:53.054 "trtype": "TCP", 00:18:53.054 "adrfam": "IPv4", 00:18:53.054 "traddr": "10.0.0.1", 00:18:53.054 "trsvcid": "49944" 00:18:53.054 }, 00:18:53.054 "auth": { 00:18:53.054 "state": "completed", 00:18:53.054 "digest": "sha512", 00:18:53.054 "dhgroup": "ffdhe6144" 00:18:53.054 } 00:18:53.054 } 00:18:53.054 ]' 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.054 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.315 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.315 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.315 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.315 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.315 18:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.315 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:53.315 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret 
DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.258 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.259 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.259 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.259 18:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.520 00:18:54.520 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.520 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.520 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.782 { 00:18:54.782 "cntlid": 131, 00:18:54.782 "qid": 0, 00:18:54.782 "state": "enabled", 00:18:54.782 "thread": "nvmf_tgt_poll_group_000", 00:18:54.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:54.782 "listen_address": { 00:18:54.782 "trtype": "TCP", 00:18:54.782 "adrfam": "IPv4", 00:18:54.782 "traddr": "10.0.0.2", 00:18:54.782 "trsvcid": "4420" 00:18:54.782 }, 00:18:54.782 "peer_address": { 00:18:54.782 "trtype": "TCP", 00:18:54.782 "adrfam": "IPv4", 00:18:54.782 "traddr": "10.0.0.1", 00:18:54.782 "trsvcid": "49966" 00:18:54.782 }, 00:18:54.782 "auth": { 00:18:54.782 "state": "completed", 00:18:54.782 "digest": "sha512", 00:18:54.782 "dhgroup": "ffdhe6144" 00:18:54.782 } 00:18:54.782 } 00:18:54.782 ]' 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.782 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.045 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.045 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.045 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.045 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:55.045 18:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:18:55.618 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:55.879 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:55.880 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.880 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.880 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.880 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.880 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.880 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.880 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.880 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.450 00:18:56.450 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.450 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.450 18:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.450 { 00:18:56.450 "cntlid": 133, 00:18:56.450 "qid": 0, 00:18:56.450 "state": "enabled", 00:18:56.450 "thread": "nvmf_tgt_poll_group_000", 00:18:56.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:56.450 "listen_address": { 00:18:56.450 "trtype": "TCP", 00:18:56.450 "adrfam": "IPv4", 00:18:56.450 "traddr": "10.0.0.2", 00:18:56.450 "trsvcid": "4420" 00:18:56.450 }, 00:18:56.450 "peer_address": { 00:18:56.450 "trtype": "TCP", 00:18:56.450 "adrfam": "IPv4", 00:18:56.450 "traddr": "10.0.0.1", 00:18:56.450 "trsvcid": "50004" 00:18:56.450 }, 00:18:56.450 "auth": { 00:18:56.450 "state": "completed", 00:18:56.450 "digest": "sha512", 00:18:56.450 "dhgroup": "ffdhe6144" 00:18:56.450 } 00:18:56.450 } 00:18:56.450 ]' 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.450 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.710 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.710 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.710 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.710 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.710 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.710 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret 
DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:56.710 18:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:57.655 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.916 00:18:57.916 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.916 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.916 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.177 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.177 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.177 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.177 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.177 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.177 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.177 { 00:18:58.177 "cntlid": 135, 00:18:58.177 "qid": 0, 00:18:58.177 "state": "enabled", 00:18:58.177 "thread": "nvmf_tgt_poll_group_000", 00:18:58.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.177 "listen_address": { 00:18:58.177 "trtype": "TCP", 00:18:58.177 "adrfam": "IPv4", 00:18:58.177 "traddr": "10.0.0.2", 00:18:58.177 "trsvcid": "4420" 00:18:58.178 }, 00:18:58.178 "peer_address": { 00:18:58.178 "trtype": "TCP", 00:18:58.178 "adrfam": "IPv4", 00:18:58.178 "traddr": "10.0.0.1", 00:18:58.178 "trsvcid": "50046" 00:18:58.178 }, 00:18:58.178 "auth": { 00:18:58.178 "state": "completed", 00:18:58.178 "digest": "sha512", 00:18:58.178 "dhgroup": "ffdhe6144" 00:18:58.178 } 00:18:58.178 } 00:18:58.178 ]' 00:18:58.178 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.178 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.178 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.439 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.439 18:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.439 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.439 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.439 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.699 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:58.699 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.271 18:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.532 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.792 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.054 { 00:19:00.054 "cntlid": 137, 00:19:00.054 "qid": 0, 00:19:00.054 "state": "enabled", 00:19:00.054 "thread": "nvmf_tgt_poll_group_000", 00:19:00.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:00.054 "listen_address": { 00:19:00.054 "trtype": "TCP", 00:19:00.054 "adrfam": "IPv4", 00:19:00.054 "traddr": "10.0.0.2", 00:19:00.054 "trsvcid": "4420" 00:19:00.054 }, 00:19:00.054 "peer_address": { 00:19:00.054 "trtype": "TCP", 00:19:00.054 "adrfam": "IPv4", 00:19:00.054 "traddr": "10.0.0.1", 00:19:00.054 "trsvcid": "34078" 00:19:00.054 }, 00:19:00.054 "auth": { 00:19:00.054 "state": "completed", 00:19:00.054 "digest": "sha512", 00:19:00.054 "dhgroup": "ffdhe8192" 00:19:00.054 } 00:19:00.054 } 00:19:00.054 ]' 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.054 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.315 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.315 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.315 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.315 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.315 18:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.315 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:19:00.315 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.266 18:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.266 18:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.837 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.837 { 00:19:01.837 "cntlid": 139, 00:19:01.837 "qid": 0, 00:19:01.837 "state": "enabled", 00:19:01.837 "thread": "nvmf_tgt_poll_group_000", 00:19:01.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.837 "listen_address": { 00:19:01.837 "trtype": "TCP", 00:19:01.837 "adrfam": "IPv4", 00:19:01.837 "traddr": "10.0.0.2", 00:19:01.837 "trsvcid": "4420" 00:19:01.837 }, 00:19:01.837 "peer_address": { 00:19:01.837 "trtype": "TCP", 00:19:01.837 "adrfam": "IPv4", 00:19:01.837 "traddr": "10.0.0.1", 00:19:01.837 "trsvcid": "34102" 00:19:01.837 }, 00:19:01.837 "auth": { 00:19:01.837 "state": "completed", 00:19:01.837 "digest": "sha512", 00:19:01.837 "dhgroup": "ffdhe8192" 00:19:01.837 } 00:19:01.837 } 00:19:01.837 ]' 00:19:01.837 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.099 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.099 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.099 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.099 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.099 18:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.099 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.099 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.360 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:19:02.360 18:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: --dhchap-ctrl-secret DHHC-1:02:YjM4YjUzYjFhNTJhYmMxYjBlYzgwZGNlYzFiNDE2NzA0ZGU3ODMzNDhkYWQ4MGMyCpnIpg==: 00:19:02.931 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.931 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.931 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.931 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.931 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.931 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.931 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.931 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.192 18:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.192 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.193 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.193 18:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.454 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.715 { 00:19:03.715 "cntlid": 141, 00:19:03.715 "qid": 0, 00:19:03.715 "state": "enabled", 00:19:03.715 "thread": "nvmf_tgt_poll_group_000", 00:19:03.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.715 "listen_address": { 00:19:03.715 "trtype": "TCP", 00:19:03.715 "adrfam": "IPv4", 00:19:03.715 "traddr": "10.0.0.2", 00:19:03.715 "trsvcid": "4420" 00:19:03.715 }, 00:19:03.715 "peer_address": { 00:19:03.715 "trtype": "TCP", 00:19:03.715 "adrfam": "IPv4", 00:19:03.715 "traddr": "10.0.0.1", 00:19:03.715 "trsvcid": "34116" 00:19:03.715 }, 00:19:03.715 "auth": { 00:19:03.715 "state": "completed", 00:19:03.715 "digest": "sha512", 00:19:03.715 "dhgroup": "ffdhe8192" 00:19:03.715 } 00:19:03.715 } 00:19:03.715 ]' 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.715 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.976 18:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.976 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.976 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.976 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.976 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.976 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:19:03.976 18:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:01:MjJhNjFhYmJmZmMyOGM0ZGJmZTc0NTMyOTU3NzljZDKvTIF8: 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.919 18:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.919 18:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.489 00:19:05.489 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.489 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.489 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.749 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.749 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.749 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.749 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.749 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.749 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.749 { 00:19:05.749 "cntlid": 143, 00:19:05.749 "qid": 0, 00:19:05.749 "state": "enabled", 00:19:05.749 "thread": "nvmf_tgt_poll_group_000", 00:19:05.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.749 "listen_address": { 00:19:05.749 "trtype": "TCP", 00:19:05.749 "adrfam": "IPv4", 00:19:05.749 "traddr": "10.0.0.2", 00:19:05.749 "trsvcid": "4420" 00:19:05.749 }, 00:19:05.749 "peer_address": { 00:19:05.749 "trtype": "TCP", 00:19:05.749 "adrfam": "IPv4", 00:19:05.750 "traddr": "10.0.0.1", 00:19:05.750 "trsvcid": "34146" 00:19:05.750 }, 00:19:05.750 "auth": { 00:19:05.750 "state": "completed", 00:19:05.750 "digest": "sha512", 00:19:05.750 "dhgroup": "ffdhe8192" 00:19:05.750 } 00:19:05.750 } 00:19:05.750 ]' 00:19:05.750 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.750 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.750 
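For readers following the trace: the unidirectional DH-HMAC-CHAP pass above reduces to three JSON-RPC calls once the hostrpc/rpc_cmd wrappers are expanded. A minimal sketch, reusing the socket paths, NQNs, and key name verbatim from this run (rpc.py abbreviates the full scripts/rpc.py path echoed in the trace; target-side calls go to the default /var/tmp/spdk.sock, host-side calls to /var/tmp/host.sock):

# Host side: restrict the initiator to sha512 digests and the ffdhe8192 DH group.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Target side: allow this host NQN to authenticate with key3 only.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key3

# Host side: attach; the controller only comes up if DH-HMAC-CHAP succeeds.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

Because ckeys[3] is empty in this test, no --dhchap-ctrlr-key is passed, so authentication stays one-way: the host proves its identity to the target, but the target never authenticates itself back.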
18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.750 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.750 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.750 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.750 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.750 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.009 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:19:06.009 18:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:19:06.579 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.580 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.841 18:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.841 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.412 00:19:07.412 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.412 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.412 18:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.412 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.412 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.412 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.412 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.412 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.412 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.412 { 00:19:07.412 "cntlid": 145, 00:19:07.412 "qid": 0, 00:19:07.412 "state": "enabled", 00:19:07.412 "thread": "nvmf_tgt_poll_group_000", 00:19:07.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:07.412 "listen_address": { 00:19:07.412 "trtype": "TCP", 00:19:07.412 "adrfam": "IPv4", 00:19:07.412 "traddr": "10.0.0.2", 00:19:07.412 "trsvcid": "4420" 00:19:07.412 }, 00:19:07.412 "peer_address": { 00:19:07.412 
"trtype": "TCP", 00:19:07.412 "adrfam": "IPv4", 00:19:07.412 "traddr": "10.0.0.1", 00:19:07.412 "trsvcid": "34178" 00:19:07.412 }, 00:19:07.412 "auth": { 00:19:07.412 "state": "completed", 00:19:07.412 "digest": "sha512", 00:19:07.412 "dhgroup": "ffdhe8192" 00:19:07.412 } 00:19:07.412 } 00:19:07.412 ]' 00:19:07.412 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.672 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.672 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.672 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.672 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.672 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.672 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.673 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.933 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:19:07.933 18:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NDlmNWI5NGY0YjU2NmFjMWIxMWQ4ZjhhNmM4YjFjMzAzZDE4ZGY5NTlmNjgxMWI3p5X16Q==: --dhchap-ctrl-secret DHHC-1:03:NWVmYmZmNTI1ODJkNjIwZGQ1MjFlNWQyYzM4NTQ4MWExMDYyNTZlNjQ5ZDZlZmQ0ZjA1ZmY1YzRmN2M3MmIxY9ax51k=: 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:08.504 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:09.075 request: 00:19:09.075 { 00:19:09.075 "name": "nvme0", 00:19:09.075 "trtype": "tcp", 00:19:09.075 "traddr": "10.0.0.2", 00:19:09.075 "adrfam": "ipv4", 00:19:09.075 "trsvcid": "4420", 00:19:09.075 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.075 "prchk_reftag": false, 00:19:09.075 "prchk_guard": false, 00:19:09.075 "hdgst": false, 00:19:09.075 "ddgst": false, 00:19:09.075 "dhchap_key": "key2", 00:19:09.075 "allow_unrecognized_csi": false, 00:19:09.075 "method": "bdev_nvme_attach_controller", 00:19:09.075 "req_id": 1 00:19:09.075 } 00:19:09.075 Got JSON-RPC error response 00:19:09.075 response: 00:19:09.075 { 00:19:09.075 "code": -5, 00:19:09.075 "message": "Input/output error" 00:19:09.075 } 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.075 18:31:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.075 18:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.336 request: 00:19:09.336 { 00:19:09.336 "name": "nvme0", 00:19:09.336 "trtype": "tcp", 00:19:09.336 "traddr": "10.0.0.2", 00:19:09.336 "adrfam": "ipv4", 00:19:09.336 "trsvcid": "4420", 00:19:09.336 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.336 "prchk_reftag": false, 00:19:09.336 "prchk_guard": false, 00:19:09.336 "hdgst": false, 00:19:09.336 "ddgst": false, 00:19:09.336 "dhchap_key": "key1", 00:19:09.336 "dhchap_ctrlr_key": "ckey2", 00:19:09.336 "allow_unrecognized_csi": false, 00:19:09.336 "method": "bdev_nvme_attach_controller", 00:19:09.336 "req_id": 1 00:19:09.336 } 00:19:09.336 Got JSON-RPC error response 00:19:09.336 response: 00:19:09.336 { 00:19:09.336 "code": -5, 00:19:09.336 "message": "Input/output error" 00:19:09.336 } 00:19:09.336 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:09.336 18:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.336 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.336 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.336 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.336 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.336 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.596 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.855 request: 00:19:09.855 { 00:19:09.855 "name": "nvme0", 00:19:09.855 "trtype": "tcp", 00:19:09.855 "traddr": "10.0.0.2", 00:19:09.855 "adrfam": "ipv4", 00:19:09.855 "trsvcid": "4420", 00:19:09.856 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.856 "prchk_reftag": false, 00:19:09.856 "prchk_guard": false, 00:19:09.856 "hdgst": false, 00:19:09.856 "ddgst": false, 00:19:09.856 "dhchap_key": "key1", 00:19:09.856 "dhchap_ctrlr_key": "ckey1", 00:19:09.856 "allow_unrecognized_csi": false, 00:19:09.856 "method": "bdev_nvme_attach_controller", 00:19:09.856 "req_id": 1 00:19:09.856 } 00:19:09.856 Got JSON-RPC error response 00:19:09.856 response: 00:19:09.856 { 00:19:09.856 "code": -5, 00:19:09.856 "message": "Input/output error" 00:19:09.856 } 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2106313 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2106313 ']' 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2106313 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.856 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2106313 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2106313' 00:19:10.115 killing process with pid 2106313 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2106313 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2106313 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2132613 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2132613 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2132613 ']' 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.115 18:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2132613 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2132613 ']' 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
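The process handoff above follows the harness's nvmfappstart pattern: the first target (pid 2106313) is killed, then a second instance is launched with auth debug logging enabled. A condensed sketch of the same sequence, with the netns name, binary path, and flags copied from the trace (pids are run-specific, and waitforlisten is the harness helper that polls the RPC socket until the app responds):

kill 2106313 && wait 2106313      # stop the previous nvmf_tgt
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!                        # 2132613 in this run
waitforlisten "$nvmfpid"          # blocks until /var/tmp/spdk.sock answers

--wait-for-rpc holds the app before framework initialization so configuration RPCs (such as the keyring registrations that follow) can land first, and -L nvmf_auth enables the auth-specific debug log component.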
00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.053 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.313 null0 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mbb 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.PE1 ]] 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PE1 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.u4J 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.hFN ]] 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hFN 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.313 18:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.313 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.313 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:11.314 18:31:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.jS9 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.HWz ]] 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.HWz 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.qhY 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
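The sequence above loads the generated key pairs into the target keyring (key0..key3 plus the ckey0..ckey2 controller keys) and then authorizes the host NQN against key3 before the first sha512/ffdhe8192 connect. Condensed to its essentials, with the NQNs, address, and key file taken verbatim from this run and assuming the host-side app already holds the same key3 in its own keyring, the flow is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target side: register the secret, then bind the host NQN to it.
$rpc keyring_file_add_key key3 /tmp/spdk.key-sha512.qhY
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key3
# Host side: attach with the matching key; the qpair's auth state
# should then report "completed" with digest sha512, dhgroup ffdhe8192.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3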
00:19:11.314 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.253 nvme0n1 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.253 { 00:19:12.253 "cntlid": 1, 00:19:12.253 "qid": 0, 00:19:12.253 "state": "enabled", 00:19:12.253 "thread": "nvmf_tgt_poll_group_000", 00:19:12.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:12.253 "listen_address": { 00:19:12.253 "trtype": "TCP", 00:19:12.253 "adrfam": "IPv4", 00:19:12.253 "traddr": "10.0.0.2", 00:19:12.253 "trsvcid": "4420" 00:19:12.253 }, 00:19:12.253 "peer_address": { 00:19:12.253 "trtype": "TCP", 00:19:12.253 "adrfam": "IPv4", 00:19:12.253 "traddr": "10.0.0.1", 00:19:12.253 "trsvcid": "33760" 00:19:12.253 }, 00:19:12.253 "auth": { 00:19:12.253 "state": "completed", 00:19:12.253 "digest": "sha512", 00:19:12.253 "dhgroup": "ffdhe8192" 00:19:12.253 } 00:19:12.253 } 00:19:12.253 ]' 00:19:12.253 18:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.253 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.253 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.513 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:12.513 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.513 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.513 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.513 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.513 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:19:12.514 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:13.452 18:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:13.452 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:13.452 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:13.452 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:13.452 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:13.452 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.453 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:13.453 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.453 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.453 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.453 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.713 request: 00:19:13.713 { 00:19:13.713 "name": "nvme0", 00:19:13.713 "trtype": "tcp", 00:19:13.713 "traddr": "10.0.0.2", 00:19:13.713 "adrfam": "ipv4", 00:19:13.713 "trsvcid": "4420", 00:19:13.713 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.713 "prchk_reftag": false, 00:19:13.713 "prchk_guard": false, 00:19:13.713 "hdgst": false, 00:19:13.713 "ddgst": false, 00:19:13.713 "dhchap_key": "key3", 00:19:13.713 "allow_unrecognized_csi": false, 00:19:13.713 "method": "bdev_nvme_attach_controller", 00:19:13.713 "req_id": 1 00:19:13.713 } 00:19:13.713 Got JSON-RPC error response 00:19:13.713 response: 00:19:13.713 { 00:19:13.713 "code": -5, 00:19:13.713 "message": "Input/output error" 00:19:13.713 } 00:19:13.713 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:13.713 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:13.713 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:13.713 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:13.713 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:13.713 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:13.714 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:13.714 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.975 request: 00:19:13.975 { 00:19:13.975 "name": "nvme0", 00:19:13.975 "trtype": "tcp", 00:19:13.975 "traddr": "10.0.0.2", 00:19:13.975 "adrfam": "ipv4", 00:19:13.975 "trsvcid": "4420", 00:19:13.975 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.975 "prchk_reftag": false, 00:19:13.975 "prchk_guard": false, 00:19:13.975 "hdgst": false, 00:19:13.975 "ddgst": false, 00:19:13.975 "dhchap_key": "key3", 00:19:13.975 "allow_unrecognized_csi": false, 00:19:13.975 "method": "bdev_nvme_attach_controller", 00:19:13.975 "req_id": 1 00:19:13.975 } 00:19:13.975 Got JSON-RPC error response 00:19:13.975 response: 00:19:13.975 { 00:19:13.975 "code": -5, 00:19:13.975 "message": "Input/output error" 00:19:13.975 } 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:13.975 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:14.236 18:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:14.497 request: 00:19:14.497 { 00:19:14.497 "name": "nvme0", 00:19:14.497 "trtype": "tcp", 00:19:14.497 "traddr": "10.0.0.2", 00:19:14.497 "adrfam": "ipv4", 00:19:14.497 "trsvcid": "4420", 00:19:14.497 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:14.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:14.497 "prchk_reftag": false, 00:19:14.497 "prchk_guard": false, 00:19:14.497 "hdgst": false, 00:19:14.497 "ddgst": false, 00:19:14.497 "dhchap_key": "key0", 00:19:14.497 "dhchap_ctrlr_key": "key1", 00:19:14.497 "allow_unrecognized_csi": false, 00:19:14.497 "method": "bdev_nvme_attach_controller", 00:19:14.497 "req_id": 1 00:19:14.497 } 00:19:14.497 Got JSON-RPC error response 00:19:14.497 response: 00:19:14.497 { 00:19:14.497 "code": -5, 00:19:14.497 "message": "Input/output error" 00:19:14.497 } 00:19:14.757 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:14.757 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.757 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.757 18:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.757 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:14.757 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:14.758 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:14.758 nvme0n1 00:19:14.758 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:14.758 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:14.758 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.018 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.018 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.018 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.278 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:15.278 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.278 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.278 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.278 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:15.278 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:15.278 18:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:15.846 nvme0n1 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:16.105 18:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.365 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.365 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:19:16.365 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: --dhchap-ctrl-secret DHHC-1:03:ZjcxOTk3NTA0MmZiZGMxOGQ4ZDBkZjlhOTQyMWRiNWRkYTU2ZDlhNjY1NWE4OTEzOTczMDQxOWU4NmJiN2Q5MWcg9to=: 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.935 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:17.196 18:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:17.768 request: 00:19:17.768 { 00:19:17.768 "name": "nvme0", 00:19:17.768 "trtype": "tcp", 00:19:17.768 "traddr": "10.0.0.2", 00:19:17.768 "adrfam": "ipv4", 00:19:17.768 "trsvcid": "4420", 00:19:17.768 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:17.768 "prchk_reftag": false, 00:19:17.768 "prchk_guard": false, 00:19:17.768 "hdgst": false, 00:19:17.768 "ddgst": false, 00:19:17.768 "dhchap_key": "key1", 00:19:17.768 "allow_unrecognized_csi": false, 00:19:17.768 "method": "bdev_nvme_attach_controller", 00:19:17.768 "req_id": 1 00:19:17.768 } 00:19:17.768 Got JSON-RPC error response 00:19:17.768 response: 00:19:17.768 { 00:19:17.768 "code": -5, 00:19:17.768 "message": "Input/output error" 00:19:17.768 } 00:19:17.768 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:17.768 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:17.768 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:17.768 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:17.768 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.768 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.768 18:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:18.339 nvme0n1 00:19:18.339 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:18.339 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:18.339 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:18.601 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:18.862 nvme0n1 00:19:18.862 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:18.862 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:18.862 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.122 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.122 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.122 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: '' 2s 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: ]] 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MWE4NzM3OGJmM2E4ZGU1NzU2ZGM1NjlmMTM2ZWE3Zjgma5Nj: 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:19.383 18:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:21.295 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:21.295 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:21.295 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:21.295 18:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: 2s 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: ]] 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2RlMmYxNTc4MTNiMTQxZWY1OTIwNjJkYjZmZTQyMzQwOTI1NzJjYWJmY2UwMmQzHLDRkg==: 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:21.295 18:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:23.845 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:24.105 nvme0n1 00:19:24.105 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:24.105 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.105 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.105 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.105 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:24.106 18:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:24.679 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:24.679 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:24.679 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:24.940 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:25.202 18:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:25.776 request: 00:19:25.776 { 00:19:25.776 "name": "nvme0", 00:19:25.776 "dhchap_key": "key1", 00:19:25.776 "dhchap_ctrlr_key": "key3", 00:19:25.776 "method": "bdev_nvme_set_keys", 00:19:25.776 "req_id": 1 00:19:25.776 } 00:19:25.776 Got JSON-RPC error response 00:19:25.776 response: 00:19:25.776 { 00:19:25.776 "code": -13, 00:19:25.776 "message": "Permission denied" 00:19:25.776 } 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:25.776 18:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:27.175 18:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:27.893 nvme0n1 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
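Key rotation in this phase follows a strict order: the subsystem is told the new pair first (nvmf_subsystem_set_keys), and only then does the host re-authenticate in place with bdev_nvme_set_keys; selecting keys the subsystem was never given is rejected with JSON-RPC error -13, as the NOT cases around this point verify. A hedged replay of the first such mismatch, using the commands exactly as they appear in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Target now expects key2/key3 for this host...
$rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# ...so re-keying the host controller with key1/key3 must fail:
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3   # -> -13 "Permission denied"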
00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:27.893 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:28.464 request: 00:19:28.464 { 00:19:28.464 "name": "nvme0", 00:19:28.464 "dhchap_key": "key2", 00:19:28.464 "dhchap_ctrlr_key": "key0", 00:19:28.464 "method": "bdev_nvme_set_keys", 00:19:28.464 "req_id": 1 00:19:28.464 } 00:19:28.464 Got JSON-RPC error response 00:19:28.464 response: 00:19:28.464 { 00:19:28.464 "code": -13, 00:19:28.464 "message": "Permission denied" 00:19:28.464 } 00:19:28.464 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:28.464 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.464 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.464 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.464 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:28.464 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:28.464 18:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.464 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:28.464 18:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2106333 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2106333 ']' 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2106333 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:29.848 
18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2106333 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2106333' 00:19:29.848 killing process with pid 2106333 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2106333 00:19:29.848 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2106333 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:30.108 rmmod nvme_tcp 00:19:30.108 rmmod nvme_fabrics 00:19:30.108 rmmod nvme_keyring 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2132613 ']' 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2132613 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2132613 ']' 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2132613 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2132613 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2132613' 00:19:30.108 killing process with pid 2132613 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2132613 00:19:30.108 18:31:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2132613 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:30.108 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.368 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:30.368 18:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.275 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:32.275 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Mbb /tmp/spdk.key-sha256.u4J /tmp/spdk.key-sha384.jS9 /tmp/spdk.key-sha512.qhY /tmp/spdk.key-sha512.PE1 /tmp/spdk.key-sha384.hFN /tmp/spdk.key-sha256.HWz '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:32.275 00:19:32.275 real 2m36.419s 00:19:32.275 user 5m52.853s 00:19:32.275 sys 0m24.846s 00:19:32.275 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.275 18:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.275 ************************************ 00:19:32.275 END TEST nvmf_auth_target 00:19:32.275 ************************************ 00:19:32.275 18:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:32.275 18:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:32.275 18:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:32.275 18:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.275 18:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:32.275 ************************************ 00:19:32.275 START TEST nvmf_bdevio_no_huge 00:19:32.275 ************************************ 00:19:32.275 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:32.537 * Looking for test storage... 
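Everything from killprocess through the final rm -f above is the shared cleanup path (cleanup, then killprocess, then nvmftestfini). Flattened out of the xtrace, the sequence is roughly the sketch below; the interface, namespace, and key names are taken from this run, and the helper bodies are simplified:

kill "$pid" && wait "$pid"    # killprocess: ps already confirmed $pid is our reactor, not sudo

sync
set +e
for i in {1..20}; do          # module unload can race with teardown, so retry
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK test rules
ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk.key-*                                  # throwaway DH-CHAP secrets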
00:19:32.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:32.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.537 --rc genhtml_branch_coverage=1 00:19:32.537 --rc genhtml_function_coverage=1 00:19:32.537 --rc genhtml_legend=1 00:19:32.537 --rc geninfo_all_blocks=1 00:19:32.537 --rc geninfo_unexecuted_blocks=1 00:19:32.537 00:19:32.537 ' 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:32.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.537 --rc genhtml_branch_coverage=1 00:19:32.537 --rc genhtml_function_coverage=1 00:19:32.537 --rc genhtml_legend=1 00:19:32.537 --rc geninfo_all_blocks=1 00:19:32.537 --rc geninfo_unexecuted_blocks=1 00:19:32.537 00:19:32.537 ' 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:32.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.537 --rc genhtml_branch_coverage=1 00:19:32.537 --rc genhtml_function_coverage=1 00:19:32.537 --rc genhtml_legend=1 00:19:32.537 --rc geninfo_all_blocks=1 00:19:32.537 --rc geninfo_unexecuted_blocks=1 00:19:32.537 00:19:32.537 ' 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:32.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.537 --rc genhtml_branch_coverage=1 00:19:32.537 --rc genhtml_function_coverage=1 00:19:32.537 --rc genhtml_legend=1 00:19:32.537 --rc geninfo_all_blocks=1 00:19:32.537 --rc geninfo_unexecuted_blocks=1 00:19:32.537 00:19:32.537 ' 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.537 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:32.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.538 18:31:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.683 
18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:40.683 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:40.684 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:40.684 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:40.684 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:40.684 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:19:40.684 00:19:40.684 --- 10.0.0.2 ping statistics --- 00:19:40.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.684 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:19:40.684 00:19:40.684 --- 10.0.0.1 ping statistics --- 00:19:40.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.684 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.684 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2140771 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2140771 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2140771 ']' 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.685 18:31:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.685 [2024-12-06 18:31:34.847548] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:19:40.685 [2024-12-06 18:31:34.847618] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:40.685 [2024-12-06 18:31:34.953285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.685 [2024-12-06 18:31:35.013843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.685 [2024-12-06 18:31:35.013892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.685 [2024-12-06 18:31:35.013901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.685 [2024-12-06 18:31:35.013909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.685 [2024-12-06 18:31:35.013915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
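nvmfappstart above boots the target inside the test namespace with hugepages disabled, which is the whole point of this suite. A simplified sketch of the launch and the wait-for-RPC step, with the namespace name from this run ($rootdir stands for the SPDK checkout, and the polling loop is a reduction of waitforlisten, not its exact body):

ip netns exec cvl_0_0_ns_spdk \
    "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# waitforlisten, reduced to its core: poll until the RPC socket answers.
until "$rootdir"/scripts/rpc.py rpc_get_methods &> /dev/null; do
    sleep 0.5
done

-m 0x78 pins reactors to cores 3-6, which the four "Reactor started" notices just below confirm; --no-huge -s 1024 gives DPDK 1024 MiB of ordinary memory in place of hugepages.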
00:19:40.685 [2024-12-06 18:31:35.015433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:40.685 [2024-12-06 18:31:35.015571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:40.685 [2024-12-06 18:31:35.015734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:40.685 [2024-12-06 18:31:35.015935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.945 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 [2024-12-06 18:31:35.727652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:41.206 Malloc0 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:41.206 [2024-12-06 18:31:35.781850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:41.206 { 00:19:41.206 "params": { 00:19:41.206 "name": "Nvme$subsystem", 00:19:41.206 "trtype": "$TEST_TRANSPORT", 00:19:41.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.206 "adrfam": "ipv4", 00:19:41.206 "trsvcid": "$NVMF_PORT", 00:19:41.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.206 "hdgst": ${hdgst:-false}, 00:19:41.206 "ddgst": ${ddgst:-false} 00:19:41.206 }, 00:19:41.206 "method": "bdev_nvme_attach_controller" 00:19:41.206 } 00:19:41.206 EOF 00:19:41.206 )") 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:41.206 18:31:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:41.206 "params": { 00:19:41.206 "name": "Nvme1", 00:19:41.206 "trtype": "tcp", 00:19:41.206 "traddr": "10.0.0.2", 00:19:41.206 "adrfam": "ipv4", 00:19:41.206 "trsvcid": "4420", 00:19:41.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.206 "hdgst": false, 00:19:41.206 "ddgst": false 00:19:41.206 }, 00:19:41.206 "method": "bdev_nvme_attach_controller" 00:19:41.206 }' 00:19:41.206 [2024-12-06 18:31:35.841141] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
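gen_nvmf_target_json above emits a single bdev_nvme_attach_controller entry, and bdevio reads it over an anonymous fd (--json /dev/fd/62 is bash process substitution). Reassembled, the invocation looks roughly like the sketch below; the params block is verbatim from this run, while the outer subsystems framing is an assumption about the generated wrapper:

"$rootdir"/test/bdev/bdevio/bdevio --no-huge -s 1024 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)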
00:19:41.206 [2024-12-06 18:31:35.841211] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2141126 ] 00:19:41.206 [2024-12-06 18:31:35.939218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:41.467 [2024-12-06 18:31:36.000205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.467 [2024-12-06 18:31:36.000370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.467 [2024-12-06 18:31:36.000370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.727 I/O targets: 00:19:41.727 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:41.728 00:19:41.728 00:19:41.728 CUnit - A unit testing framework for C - Version 2.1-3 00:19:41.728 http://cunit.sourceforge.net/ 00:19:41.728 00:19:41.728 00:19:41.728 Suite: bdevio tests on: Nvme1n1 00:19:41.728 Test: blockdev write read block ...passed 00:19:41.728 Test: blockdev write zeroes read block ...passed 00:19:41.728 Test: blockdev write zeroes read no split ...passed 00:19:41.728 Test: blockdev write zeroes read split ...passed 00:19:41.728 Test: blockdev write zeroes read split partial ...passed 00:19:41.728 Test: blockdev reset ...[2024-12-06 18:31:36.446094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:41.728 [2024-12-06 18:31:36.446197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9fa430 (9): Bad file descriptor 00:19:41.728 [2024-12-06 18:31:36.501499] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:41.728 passed 00:19:41.728 Test: blockdev write read 8 blocks ...passed 00:19:41.728 Test: blockdev write read size > 128k ...passed 00:19:41.728 Test: blockdev write read invalid size ...passed 00:19:41.988 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:41.988 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:41.988 Test: blockdev write read max offset ...passed 00:19:41.988 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:41.988 Test: blockdev writev readv 8 blocks ...passed 00:19:41.988 Test: blockdev writev readv 30 x 1block ...passed 00:19:41.988 Test: blockdev writev readv block ...passed 00:19:41.988 Test: blockdev writev readv size > 128k ...passed 00:19:41.988 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:41.988 Test: blockdev comparev and writev ...[2024-12-06 18:31:36.682566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-06 18:31:36.682614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:41.988 [2024-12-06 18:31:36.682631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-06 18:31:36.682646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:41.988 [2024-12-06 18:31:36.683081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-06 18:31:36.683098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:41.988 [2024-12-06 18:31:36.683114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-06 18:31:36.683123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:41.988 [2024-12-06 18:31:36.683577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.988 [2024-12-06 18:31:36.683591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:41.989 [2024-12-06 18:31:36.683618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.989 [2024-12-06 18:31:36.683627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:41.989 [2024-12-06 18:31:36.684032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.989 [2024-12-06 18:31:36.684048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:41.989 [2024-12-06 18:31:36.684062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.989 [2024-12-06 18:31:36.684071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:41.989 passed 00:19:41.989 Test: blockdev nvme passthru rw ...passed 00:19:41.989 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:31:36.768145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.989 [2024-12-06 18:31:36.768168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:41.989 [2024-12-06 18:31:36.768430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.989 [2024-12-06 18:31:36.768443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:41.989 [2024-12-06 18:31:36.768696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.989 [2024-12-06 18:31:36.768712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:41.989 [2024-12-06 18:31:36.768968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.989 [2024-12-06 18:31:36.768982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:41.989 passed 00:19:42.249 Test: blockdev nvme admin passthru ...passed 00:19:42.249 Test: blockdev copy ...passed 00:19:42.249 00:19:42.249 Run Summary: Type Total Ran Passed Failed Inactive 00:19:42.249 suites 1 1 n/a 0 0 00:19:42.249 tests 23 23 23 0 0 00:19:42.249 asserts 152 152 152 0 n/a 00:19:42.249 00:19:42.249 Elapsed time = 1.062 seconds 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.509 rmmod nvme_tcp 00:19:42.509 rmmod nvme_fabrics 00:19:42.509 rmmod nvme_keyring 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2140771 ']' 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2140771 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2140771 ']' 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2140771 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140771 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140771' 00:19:42.509 killing process with pid 2140771 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2140771 00:19:42.509 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2140771 00:19:43.080 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.080 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.080 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.080 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:43.081 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.081 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:43.081 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.081 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.081 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:43.081 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.081 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.081 18:31:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.993 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.993 00:19:44.993 real 0m12.621s 00:19:44.993 user 0m14.582s 00:19:44.993 sys 0m6.702s 00:19:44.993 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.993 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:44.993 ************************************ 00:19:44.993 END TEST nvmf_bdevio_no_huge 00:19:44.993 ************************************ 00:19:44.993 18:31:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:44.993 18:31:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.993 18:31:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.993 18:31:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.993 ************************************ 00:19:44.993 START TEST nvmf_tls 00:19:44.993 ************************************ 00:19:44.993 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:45.255 * Looking for test storage... 00:19:45.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:45.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.255 --rc genhtml_branch_coverage=1 00:19:45.255 --rc genhtml_function_coverage=1 00:19:45.255 --rc genhtml_legend=1 00:19:45.255 --rc geninfo_all_blocks=1 00:19:45.255 --rc geninfo_unexecuted_blocks=1 00:19:45.255 00:19:45.255 ' 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:45.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.255 --rc genhtml_branch_coverage=1 00:19:45.255 --rc genhtml_function_coverage=1 00:19:45.255 --rc genhtml_legend=1 00:19:45.255 --rc geninfo_all_blocks=1 00:19:45.255 --rc geninfo_unexecuted_blocks=1 00:19:45.255 00:19:45.255 ' 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:45.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.255 --rc genhtml_branch_coverage=1 00:19:45.255 --rc genhtml_function_coverage=1 00:19:45.255 --rc genhtml_legend=1 00:19:45.255 --rc geninfo_all_blocks=1 00:19:45.255 --rc geninfo_unexecuted_blocks=1 00:19:45.255 00:19:45.255 ' 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:45.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.255 --rc genhtml_branch_coverage=1 00:19:45.255 --rc genhtml_function_coverage=1 00:19:45.255 --rc genhtml_legend=1 00:19:45.255 --rc geninfo_all_blocks=1 00:19:45.255 --rc geninfo_unexecuted_blocks=1 00:19:45.255 00:19:45.255 ' 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
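The cmp_versions trace above boils down to a field-wise numeric compare: split both version strings on ".-:", treat missing fields as zero, and return at the first field that differs. A minimal stand-alone sketch of that pattern, assuming plain bash and purely numeric fields (version_lt is a hypothetical name; the traced helpers are lt/cmp_versions in scripts/common.sh):

version_lt() {                      # 0 (true) if $1 sorts strictly before $2
    local -a a b
    IFS='.-:' read -ra a <<< "$1"   # same separators the trace uses
    IFS='.-:' read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                        # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # mirrors the "lt 1.15 2" call traced above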
00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.255 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.256 18:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:53.399 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:53.399 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:53.399 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:53.399 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.399 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:53.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:19:53.400 00:19:53.400 --- 10.0.0.2 ping statistics --- 00:19:53.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.400 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:19:53.400 00:19:53.400 --- 10.0.0.1 ping statistics --- 00:19:53.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.400 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2145469 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2145469 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2145469 ']' 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.400 18:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.400 [2024-12-06 18:31:47.511208] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
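The nvmf_tcp_init sequence traced above isolates one e810 port in a network namespace so that target and initiator talk over a real link on a single host. A condensed sketch of that setup, assuming root privileges; the namespace and interface names mirror the captured run (cvl_0_0 becomes the target side, cvl_0_1 stays in the root namespace as the initiator):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # move the target port out of the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                        # root ns -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                    # and back

The target binary itself is then prefixed with "ip netns exec $NS", which is why the nvmf_tgt launch above runs under cvl_0_0_ns_spdk.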
00:19:53.400 [2024-12-06 18:31:47.511267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.400 [2024-12-06 18:31:47.612304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.400 [2024-12-06 18:31:47.663790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.400 [2024-12-06 18:31:47.663846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.400 [2024-12-06 18:31:47.663855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.400 [2024-12-06 18:31:47.663862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.400 [2024-12-06 18:31:47.663869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.400 [2024-12-06 18:31:47.664657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.660 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.661 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.661 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.661 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:53.661 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.661 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.661 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:53.661 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:53.921 true 00:19:53.921 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.921 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:54.182 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:54.182 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:54.182 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:54.443 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.443 18:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:54.443 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:54.443 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:54.443 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:54.704 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.704 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:54.966 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:54.966 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:54.966 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.966 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:54.966 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:54.966 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:54.966 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:55.227 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:55.227 18:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:55.489 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:55.489 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:55.489 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:55.489 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:55.489 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.MczH2BUTmX 00:19:55.751 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:56.012 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.hBqgLoJ2i6 00:19:56.012 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:56.012 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:56.012 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MczH2BUTmX 00:19:56.012 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.hBqgLoJ2i6 00:19:56.012 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:56.012 18:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:56.271 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.MczH2BUTmX 00:19:56.271 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MczH2BUTmX 00:19:56.271 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:56.532 [2024-12-06 18:31:51.154924] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.532 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:56.794 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:56.794 [2024-12-06 18:31:51.523818] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.794 [2024-12-06 18:31:51.524014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.794 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:57.054 malloc0 00:19:57.054 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:57.316 18:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MczH2BUTmX 00:19:57.576 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:57.576 18:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MczH2BUTmX 00:20:09.812 Initializing NVMe Controllers 00:20:09.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:09.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:09.812 Initialization complete. Launching workers. 00:20:09.812 ======================================================== 00:20:09.812 Latency(us) 00:20:09.812 Device Information : IOPS MiB/s Average min max 00:20:09.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18559.88 72.50 3448.45 1504.29 6074.63 00:20:09.812 ======================================================== 00:20:09.812 Total : 18559.88 72.50 3448.45 1504.29 6074.63 00:20:09.812 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MczH2BUTmX 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MczH2BUTmX 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2148523 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2148523 /var/tmp/bdevperf.sock 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.812 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2148523 ']' 00:20:09.813 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.813 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.813 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:09.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.813 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.813 18:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.813 [2024-12-06 18:32:02.448461] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:20:09.813 [2024-12-06 18:32:02.448513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148523 ] 00:20:09.813 [2024-12-06 18:32:02.538129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.813 [2024-12-06 18:32:02.573592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MczH2BUTmX 00:20:09.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.813 [2024-12-06 18:32:03.586419] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.813 TLSTESTn1 00:20:09.813 18:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:09.813 Running I/O for 10 seconds... 
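The NVMeTLSkey-1:01:... strings generated earlier follow the NVMe/TCP PSK interchange format: a fixed prefix, a one-byte hash identifier (01 here), and a colon-terminated base64 blob. A hedged sketch of the encoding, assuming (as the inline python in the traced format_interchange_psk helper appears to do) that the blob is the raw key bytes with a little-endian CRC-32 appended:

key="00112233445566778899aabbccddeeff"   # same interchange seed as the trace
python3 - "$key" <<'EOF'
import base64, sys, zlib
k = sys.argv[1].encode()                     # the helper feeds the hex string as ASCII bytes
crc = zlib.crc32(k).to_bytes(4, "little")    # 4-byte integrity tag appended to the key
print("NVMeTLSkey-1:01:" + base64.b64encode(k + crc).decode() + ":")
EOF

The resulting key is written to a temp file, chmod 0600, registered via keyring_file_add_key, and finally bound to a host NQN with nvmf_subsystem_add_host --psk, as traced above.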
00:20:11.013 4305.00 IOPS, 16.82 MiB/s [2024-12-06T17:32:07.179Z] 4914.00 IOPS, 19.20 MiB/s [2024-12-06T17:32:08.120Z] 5150.00 IOPS, 20.12 MiB/s [2024-12-06T17:32:09.060Z] 5118.50 IOPS, 19.99 MiB/s [2024-12-06T17:32:10.013Z] 5223.20 IOPS, 20.40 MiB/s [2024-12-06T17:32:10.955Z] 5241.17 IOPS, 20.47 MiB/s [2024-12-06T17:32:11.896Z] 5367.29 IOPS, 20.97 MiB/s [2024-12-06T17:32:12.839Z] 5368.25 IOPS, 20.97 MiB/s [2024-12-06T17:32:14.225Z] 5315.78 IOPS, 20.76 MiB/s [2024-12-06T17:32:14.225Z] 5324.00 IOPS, 20.80 MiB/s 00:20:19.441 Latency(us) 00:20:19.441 [2024-12-06T17:32:14.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.441 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:19.441 Verification LBA range: start 0x0 length 0x2000 00:20:19.441 TLSTESTn1 : 10.01 5330.61 20.82 0.00 0.00 23978.02 4532.91 27852.80 00:20:19.441 [2024-12-06T17:32:14.225Z] =================================================================================================================== 00:20:19.441 [2024-12-06T17:32:14.225Z] Total : 5330.61 20.82 0.00 0.00 23978.02 4532.91 27852.80 00:20:19.441 { 00:20:19.441 "results": [ 00:20:19.441 { 00:20:19.441 "job": "TLSTESTn1", 00:20:19.441 "core_mask": "0x4", 00:20:19.441 "workload": "verify", 00:20:19.441 "status": "finished", 00:20:19.441 "verify_range": { 00:20:19.441 "start": 0, 00:20:19.441 "length": 8192 00:20:19.441 }, 00:20:19.441 "queue_depth": 128, 00:20:19.441 "io_size": 4096, 00:20:19.441 "runtime": 10.01142, 00:20:19.441 "iops": 5330.612440592843, 00:20:19.441 "mibps": 20.822704846065793, 00:20:19.441 "io_failed": 0, 00:20:19.441 "io_timeout": 0, 00:20:19.441 "avg_latency_us": 23978.02073840888, 00:20:19.441 "min_latency_us": 4532.906666666667, 00:20:19.441 "max_latency_us": 27852.8 00:20:19.441 } 00:20:19.441 ], 00:20:19.441 "core_count": 1 00:20:19.441 } 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2148523 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2148523 ']' 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2148523 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2148523 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2148523' 00:20:19.441 killing process with pid 2148523 00:20:19.441 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2148523 00:20:19.442 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.442 00:20:19.442 Latency(us) 00:20:19.442 [2024-12-06T17:32:14.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.442 [2024-12-06T17:32:14.226Z] 
=================================================================================================================== 00:20:19.442 [2024-12-06T17:32:14.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.442 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2148523 00:20:19.442 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hBqgLoJ2i6 00:20:19.442 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:19.442 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hBqgLoJ2i6 00:20:19.442 18:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hBqgLoJ2i6 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hBqgLoJ2i6 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2150795 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2150795 /var/tmp/bdevperf.sock 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2150795 ']' 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.442 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.442 [2024-12-06 18:32:14.055720] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:20:19.442 [2024-12-06 18:32:14.055775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150795 ] 00:20:19.442 [2024-12-06 18:32:14.141899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.442 [2024-12-06 18:32:14.170770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.382 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.382 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:20.382 18:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hBqgLoJ2i6 00:20:20.382 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:20.642 [2024-12-06 18:32:15.194573] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.642 [2024-12-06 18:32:15.199015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:20.642 [2024-12-06 18:32:15.199630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c10800 (107): Transport endpoint is not connected 00:20:20.643 [2024-12-06 18:32:15.200624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c10800 (9): Bad file descriptor 00:20:20.643 [2024-12-06 18:32:15.201626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:20.643 [2024-12-06 18:32:15.201635] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:20.643 [2024-12-06 18:32:15.201644] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:20.643 [2024-12-06 18:32:15.201657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:20.643 request: 00:20:20.643 { 00:20:20.643 "name": "TLSTEST", 00:20:20.643 "trtype": "tcp", 00:20:20.643 "traddr": "10.0.0.2", 00:20:20.643 "adrfam": "ipv4", 00:20:20.643 "trsvcid": "4420", 00:20:20.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.643 "prchk_reftag": false, 00:20:20.643 "prchk_guard": false, 00:20:20.643 "hdgst": false, 00:20:20.643 "ddgst": false, 00:20:20.643 "psk": "key0", 00:20:20.643 "allow_unrecognized_csi": false, 00:20:20.643 "method": "bdev_nvme_attach_controller", 00:20:20.643 "req_id": 1 00:20:20.643 } 00:20:20.643 Got JSON-RPC error response 00:20:20.643 response: 00:20:20.643 { 00:20:20.643 "code": -5, 00:20:20.643 "message": "Input/output error" 00:20:20.643 } 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2150795 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2150795 ']' 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2150795 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2150795 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2150795' 00:20:20.643 killing process with pid 2150795 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2150795 00:20:20.643 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.643 00:20:20.643 Latency(us) 00:20:20.643 [2024-12-06T17:32:15.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.643 [2024-12-06T17:32:15.427Z] =================================================================================================================== 00:20:20.643 [2024-12-06T17:32:15.427Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2150795 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MczH2BUTmX 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.MczH2BUTmX 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MczH2BUTmX 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MczH2BUTmX 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2150979 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2150979 /var/tmp/bdevperf.sock 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2150979 ']' 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.643 18:32:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.903 [2024-12-06 18:32:15.445201] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:20:20.903 [2024-12-06 18:32:15.445256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150979 ] 00:20:20.903 [2024-12-06 18:32:15.531138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.903 [2024-12-06 18:32:15.559605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.474 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.474 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:21.474 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MczH2BUTmX 00:20:21.734 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:21.996 [2024-12-06 18:32:16.567441] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.996 [2024-12-06 18:32:16.572086] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:21.996 [2024-12-06 18:32:16.572107] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:21.996 [2024-12-06 18:32:16.572128] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:21.996 [2024-12-06 18:32:16.572625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd5800 (107): Transport endpoint is not connected 00:20:21.996 [2024-12-06 18:32:16.573620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd5800 (9): Bad file descriptor 00:20:21.996 [2024-12-06 18:32:16.574621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:21.996 [2024-12-06 18:32:16.574629] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.996 [2024-12-06 18:32:16.574642] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:21.996 [2024-12-06 18:32:16.574652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
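The failure above is the intended outcome of the first negative case (tls.sh@150): the target was configured with a PSK for nqn.2016-06.io.spdk:host1 only, while this bdevperf instance attaches as host2. The server resolves the TLS PSK by the identity string visible in the errors, "NVMe0R01 <hostnqn> <subnqn>", finds no match, and the connect fails, surfacing as the JSON-RPC error -5 (Input/output error) dumped next. The failing client-side calls, exactly as issued above (rpc.py abbreviates the full scripts/rpc.py path):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MczH2BUTmX
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
    # fails: the target has no PSK registered for the host2/cnode1 identity

The second negative case (tls.sh@153) repeats the pattern with the subsystem NQN swapped instead, attaching to nqn.2016-06.io.spdk:cnode2, which the target does not serve, and fails the same way.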
00:20:21.996 request: 00:20:21.996 { 00:20:21.996 "name": "TLSTEST", 00:20:21.996 "trtype": "tcp", 00:20:21.996 "traddr": "10.0.0.2", 00:20:21.996 "adrfam": "ipv4", 00:20:21.996 "trsvcid": "4420", 00:20:21.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.996 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:21.996 "prchk_reftag": false, 00:20:21.996 "prchk_guard": false, 00:20:21.996 "hdgst": false, 00:20:21.996 "ddgst": false, 00:20:21.996 "psk": "key0", 00:20:21.996 "allow_unrecognized_csi": false, 00:20:21.996 "method": "bdev_nvme_attach_controller", 00:20:21.996 "req_id": 1 00:20:21.996 } 00:20:21.996 Got JSON-RPC error response 00:20:21.996 response: 00:20:21.996 { 00:20:21.996 "code": -5, 00:20:21.996 "message": "Input/output error" 00:20:21.996 } 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2150979 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2150979 ']' 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2150979 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2150979 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2150979' 00:20:21.996 killing process with pid 2150979 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2150979 00:20:21.996 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.996 00:20:21.996 Latency(us) 00:20:21.996 [2024-12-06T17:32:16.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.996 [2024-12-06T17:32:16.780Z] =================================================================================================================== 00:20:21.996 [2024-12-06T17:32:16.780Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2150979 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MczH2BUTmX 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.MczH2BUTmX 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MczH2BUTmX 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MczH2BUTmX 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2151234 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2151234 /var/tmp/bdevperf.sock 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2151234 ']' 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.996 18:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.257 [2024-12-06 18:32:16.817620] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:20:22.257 [2024-12-06 18:32:16.817680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151234 ] 00:20:22.257 [2024-12-06 18:32:16.902775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.257 [2024-12-06 18:32:16.931234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.202 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.202 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:23.202 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MczH2BUTmX 00:20:23.202 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.203 [2024-12-06 18:32:17.951215] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.203 [2024-12-06 18:32:17.955686] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:23.203 [2024-12-06 18:32:17.955707] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:23.203 [2024-12-06 18:32:17.955725] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:23.203 [2024-12-06 18:32:17.956365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a7800 (107): Transport endpoint is not connected 00:20:23.203 [2024-12-06 18:32:17.957359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a7800 (9): Bad file descriptor 00:20:23.203 [2024-12-06 18:32:17.958361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:23.203 [2024-12-06 18:32:17.958369] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:23.203 [2024-12-06 18:32:17.958375] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:23.203 [2024-12-06 18:32:17.958384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:20:23.203 request: 00:20:23.203 { 00:20:23.203 "name": "TLSTEST", 00:20:23.203 "trtype": "tcp", 00:20:23.203 "traddr": "10.0.0.2", 00:20:23.203 "adrfam": "ipv4", 00:20:23.203 "trsvcid": "4420", 00:20:23.203 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:23.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.203 "prchk_reftag": false, 00:20:23.203 "prchk_guard": false, 00:20:23.203 "hdgst": false, 00:20:23.203 "ddgst": false, 00:20:23.203 "psk": "key0", 00:20:23.203 "allow_unrecognized_csi": false, 00:20:23.203 "method": "bdev_nvme_attach_controller", 00:20:23.203 "req_id": 1 00:20:23.203 } 00:20:23.203 Got JSON-RPC error response 00:20:23.203 response: 00:20:23.203 { 00:20:23.203 "code": -5, 00:20:23.203 "message": "Input/output error" 00:20:23.203 } 00:20:23.465 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2151234 00:20:23.465 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2151234 ']' 00:20:23.465 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2151234 00:20:23.465 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:23.465 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:23.465 18:32:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2151234 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2151234' 00:20:23.465 killing process with pid 2151234 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2151234 00:20:23.465 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.465 00:20:23.465 Latency(us) 00:20:23.465 [2024-12-06T17:32:18.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.465 [2024-12-06T17:32:18.249Z] =================================================================================================================== 00:20:23.465 [2024-12-06T17:32:18.249Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2151234 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:23.465 
18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2151576 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2151576 /var/tmp/bdevperf.sock 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2151576 ']' 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:23.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.465 18:32:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.465 [2024-12-06 18:32:18.208810] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
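The third negative case (tls.sh@156) passes an empty string where the key path belongs. keyring_file rejects it before any network activity, since keyring_file_check_path requires an absolute path, so the registration below fails with -1 (Operation not permitted) and the attach that references key0 then fails with -126 (Required key not available):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
    # *ERROR*: Non-absolute paths are not allowed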
00:20:23.465 [2024-12-06 18:32:18.208866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151576 ] 00:20:23.724 [2024-12-06 18:32:18.294788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.724 [2024-12-06 18:32:18.322946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.295 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.295 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:24.296 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:24.557 [2024-12-06 18:32:19.146008] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:24.557 [2024-12-06 18:32:19.146037] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:24.557 request: 00:20:24.557 { 00:20:24.557 "name": "key0", 00:20:24.557 "path": "", 00:20:24.557 "method": "keyring_file_add_key", 00:20:24.557 "req_id": 1 00:20:24.557 } 00:20:24.557 Got JSON-RPC error response 00:20:24.557 response: 00:20:24.557 { 00:20:24.557 "code": -1, 00:20:24.557 "message": "Operation not permitted" 00:20:24.557 } 00:20:24.557 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:24.557 [2024-12-06 18:32:19.330552] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.557 [2024-12-06 18:32:19.330583] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:24.557 request: 00:20:24.557 { 00:20:24.557 "name": "TLSTEST", 00:20:24.557 "trtype": "tcp", 00:20:24.557 "traddr": "10.0.0.2", 00:20:24.557 "adrfam": "ipv4", 00:20:24.557 "trsvcid": "4420", 00:20:24.557 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.557 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.557 "prchk_reftag": false, 00:20:24.557 "prchk_guard": false, 00:20:24.557 "hdgst": false, 00:20:24.557 "ddgst": false, 00:20:24.557 "psk": "key0", 00:20:24.557 "allow_unrecognized_csi": false, 00:20:24.557 "method": "bdev_nvme_attach_controller", 00:20:24.557 "req_id": 1 00:20:24.557 } 00:20:24.557 Got JSON-RPC error response 00:20:24.557 response: 00:20:24.557 { 00:20:24.557 "code": -126, 00:20:24.557 "message": "Required key not available" 00:20:24.557 } 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2151576 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2151576 ']' 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2151576 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2151576 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2151576' 00:20:24.818 killing process with pid 2151576 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2151576 00:20:24.818 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.818 00:20:24.818 Latency(us) 00:20:24.818 [2024-12-06T17:32:19.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.818 [2024-12-06T17:32:19.602Z] =================================================================================================================== 00:20:24.818 [2024-12-06T17:32:19.602Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2151576 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2145469 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2145469 ']' 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2145469 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145469 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145469' 00:20:24.818 killing process with pid 2145469 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2145469 00:20:24.818 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2145469 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:25.081 18:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.CbCNiDTonl 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.CbCNiDTonl 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2151929 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2151929 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2151929 ']' 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.081 18:32:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.081 [2024-12-06 18:32:19.815232] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:20:25.081 [2024-12-06 18:32:19.815289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.343 [2024-12-06 18:32:19.906890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.343 [2024-12-06 18:32:19.937547] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.343 [2024-12-06 18:32:19.937579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
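The key material for the remainder of the suite was generated just above: format_interchange_psk wraps the raw 48-character key in the NVMe TLS PSK interchange format, NVMeTLSkey-1:02:<base64 payload>:, where the 02 tag reflects the requested digest argument 2 (SHA-384; digest 1 would yield the 01 tag for SHA-256). Judging by the key_long value, the payload appears to be base64 of the key bytes followed by their little-endian CRC-32; a sketch that reproduces it under that assumption:

    key=00112233445566778899aabbccddeeff0011223344556677
    b64=$(python3 -c "import base64, zlib; k = b'$key'; print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, 'little')).decode())")
    echo "NVMeTLSkey-1:02:${b64}:"
    # tls.sh then writes the result to a mktemp file and chmods it 0600,
    # the mode the keyring checks for later in the run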
00:20:25.343 [2024-12-06 18:32:19.937585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.343 [2024-12-06 18:32:19.937590] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.343 [2024-12-06 18:32:19.937594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.343 [2024-12-06 18:32:19.938078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.CbCNiDTonl 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CbCNiDTonl 00:20:25.914 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:26.175 [2024-12-06 18:32:20.804208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.175 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:26.436 18:32:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:26.436 [2024-12-06 18:32:21.153057] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:26.436 [2024-12-06 18:32:21.153250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.436 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:26.698 malloc0 00:20:26.698 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:26.959 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:20:26.959 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CbCNiDTonl 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CbCNiDTonl 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2152300 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2152300 /var/tmp/bdevperf.sock 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2152300 ']' 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.220 18:32:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:27.220 [2024-12-06 18:32:21.944480] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
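Condensed, the target-side setup the positive test (tls.sh@166) just ran, every call verbatim from the trace above, with rpc.py again standing in for the full scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k flag is what makes the listener TLS-capable (the notices flag TLS support as experimental), and this time the host NQN, subsystem NQN and key all line up, so the bdevperf attach that follows succeeds and TLSTESTn1 is created.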
00:20:27.220 [2024-12-06 18:32:21.944530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2152300 ] 00:20:27.484 [2024-12-06 18:32:22.004976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.484 [2024-12-06 18:32:22.033683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.484 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.484 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:27.484 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:20:27.853 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:27.853 [2024-12-06 18:32:22.451982] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.853 TLSTESTn1 00:20:27.853 18:32:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:28.188 Running I/O for 10 seconds... 00:20:30.067 6373.00 IOPS, 24.89 MiB/s [2024-12-06T17:32:25.790Z] 6572.50 IOPS, 25.67 MiB/s [2024-12-06T17:32:26.729Z] 6647.33 IOPS, 25.97 MiB/s [2024-12-06T17:32:27.671Z] 6709.50 IOPS, 26.21 MiB/s [2024-12-06T17:32:29.055Z] 6717.40 IOPS, 26.24 MiB/s [2024-12-06T17:32:29.997Z] 6738.33 IOPS, 26.32 MiB/s [2024-12-06T17:32:30.939Z] 6732.29 IOPS, 26.30 MiB/s [2024-12-06T17:32:31.881Z] 6728.12 IOPS, 26.28 MiB/s [2024-12-06T17:32:32.822Z] 6688.89 IOPS, 26.13 MiB/s [2024-12-06T17:32:32.822Z] 6650.50 IOPS, 25.98 MiB/s 00:20:38.038 Latency(us) 00:20:38.038 [2024-12-06T17:32:32.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.038 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:38.038 Verification LBA range: start 0x0 length 0x2000 00:20:38.038 TLSTESTn1 : 10.01 6654.82 26.00 0.00 0.00 19205.47 4724.05 25340.59 00:20:38.038 [2024-12-06T17:32:32.822Z] =================================================================================================================== 00:20:38.038 [2024-12-06T17:32:32.822Z] Total : 6654.82 26.00 0.00 0.00 19205.47 4724.05 25340.59 00:20:38.038 { 00:20:38.038 "results": [ 00:20:38.038 { 00:20:38.038 "job": "TLSTESTn1", 00:20:38.038 "core_mask": "0x4", 00:20:38.038 "workload": "verify", 00:20:38.038 "status": "finished", 00:20:38.038 "verify_range": { 00:20:38.038 "start": 0, 00:20:38.038 "length": 8192 00:20:38.038 }, 00:20:38.038 "queue_depth": 128, 00:20:38.038 "io_size": 4096, 00:20:38.038 "runtime": 10.012585, 00:20:38.038 "iops": 6654.824902859751, 00:20:38.038 "mibps": 25.995409776795903, 00:20:38.038 "io_failed": 0, 00:20:38.038 "io_timeout": 0, 00:20:38.038 "avg_latency_us": 19205.46813623084, 00:20:38.038 "min_latency_us": 4724.053333333333, 00:20:38.038 "max_latency_us": 25340.586666666666 00:20:38.038 } 00:20:38.038 ], 00:20:38.038 
"core_count": 1 00:20:38.038 } 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2152300 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2152300 ']' 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2152300 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2152300 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:38.038 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2152300' 00:20:38.038 killing process with pid 2152300 00:20:38.039 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2152300 00:20:38.039 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.039 00:20:38.039 Latency(us) 00:20:38.039 [2024-12-06T17:32:32.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.039 [2024-12-06T17:32:32.823Z] =================================================================================================================== 00:20:38.039 [2024-12-06T17:32:32.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.039 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2152300 00:20:38.299 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.CbCNiDTonl 00:20:38.299 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CbCNiDTonl 00:20:38.299 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:38.299 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CbCNiDTonl 00:20:38.299 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:38.299 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.CbCNiDTonl 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.CbCNiDTonl 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2154489 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2154489 /var/tmp/bdevperf.sock 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2154489 ']' 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:38.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.300 18:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.300 [2024-12-06 18:32:32.934427] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
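keyring_file insists on owner-only key files: keyring_file_check_path refuses any mode with group or other permission bits set. With the key now at 0666, the registration attempt below fails with -1 (Operation not permitted), and the attach that references key0 then fails with -126 (Required key not available):

    chmod 0666 /tmp/tmp.CbCNiDTonl    # deliberately too permissive (tls.sh@171)
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl
    # *ERROR*: Invalid permissions for key file '/tmp/tmp.CbCNiDTonl': 0100666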
00:20:38.300 [2024-12-06 18:32:32.934490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154489 ] 00:20:38.300 [2024-12-06 18:32:33.016045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.300 [2024-12-06 18:32:33.045090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.242 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.242 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.242 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:20:39.242 [2024-12-06 18:32:33.872320] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CbCNiDTonl': 0100666 00:20:39.242 [2024-12-06 18:32:33.872344] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:39.242 request: 00:20:39.242 { 00:20:39.242 "name": "key0", 00:20:39.242 "path": "/tmp/tmp.CbCNiDTonl", 00:20:39.242 "method": "keyring_file_add_key", 00:20:39.242 "req_id": 1 00:20:39.242 } 00:20:39.242 Got JSON-RPC error response 00:20:39.242 response: 00:20:39.242 { 00:20:39.242 "code": -1, 00:20:39.242 "message": "Operation not permitted" 00:20:39.242 } 00:20:39.242 18:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:39.503 [2024-12-06 18:32:34.040809] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.503 [2024-12-06 18:32:34.040834] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:39.503 request: 00:20:39.503 { 00:20:39.503 "name": "TLSTEST", 00:20:39.503 "trtype": "tcp", 00:20:39.503 "traddr": "10.0.0.2", 00:20:39.503 "adrfam": "ipv4", 00:20:39.503 "trsvcid": "4420", 00:20:39.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:39.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:39.503 "prchk_reftag": false, 00:20:39.503 "prchk_guard": false, 00:20:39.503 "hdgst": false, 00:20:39.503 "ddgst": false, 00:20:39.503 "psk": "key0", 00:20:39.503 "allow_unrecognized_csi": false, 00:20:39.503 "method": "bdev_nvme_attach_controller", 00:20:39.503 "req_id": 1 00:20:39.503 } 00:20:39.503 Got JSON-RPC error response 00:20:39.503 response: 00:20:39.503 { 00:20:39.503 "code": -126, 00:20:39.503 "message": "Required key not available" 00:20:39.503 } 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2154489 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2154489 ']' 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2154489 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154489 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154489' 00:20:39.503 killing process with pid 2154489 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2154489 00:20:39.503 Received shutdown signal, test time was about 10.000000 seconds 00:20:39.503 00:20:39.503 Latency(us) 00:20:39.503 [2024-12-06T17:32:34.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.503 [2024-12-06T17:32:34.287Z] =================================================================================================================== 00:20:39.503 [2024-12-06T17:32:34.287Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:39.503 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2154489 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2151929 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2151929 ']' 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2151929 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.504 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2151929 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2151929' 00:20:39.765 killing process with pid 2151929 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2151929 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2151929 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2154754 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2154754 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2154754 ']' 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.765 18:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.765 [2024-12-06 18:32:34.471303] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:20:39.765 [2024-12-06 18:32:34.471362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.025 [2024-12-06 18:32:34.560047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.025 [2024-12-06 18:32:34.590110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.025 [2024-12-06 18:32:34.590139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.025 [2024-12-06 18:32:34.590145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.025 [2024-12-06 18:32:34.590150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.025 [2024-12-06 18:32:34.590158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
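tls.sh@178 exercises the same permission check on the target side: the freshly started nvmf_tgt (pid 2154754) repeats the whole subsystem setup with the key file still at 0666, keyring_file_add_key fails identically, and since key0 was never created, the host registration that names it fails as well:

    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # -32603 Internal error: "Key 'key0' does not exist"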
00:20:40.025 [2024-12-06 18:32:34.590615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.CbCNiDTonl 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.CbCNiDTonl 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.CbCNiDTonl 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CbCNiDTonl 00:20:40.597 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.856 [2024-12-06 18:32:35.455242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.856 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.856 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.115 [2024-12-06 18:32:35.759984] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.115 [2024-12-06 18:32:35.760169] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.115 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.375 malloc0 00:20:41.375 18:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.375 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:20:41.635 [2024-12-06 
18:32:36.267195] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.CbCNiDTonl': 0100666 00:20:41.635 [2024-12-06 18:32:36.267219] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:41.635 request: 00:20:41.635 { 00:20:41.635 "name": "key0", 00:20:41.635 "path": "/tmp/tmp.CbCNiDTonl", 00:20:41.635 "method": "keyring_file_add_key", 00:20:41.635 "req_id": 1 00:20:41.635 } 00:20:41.635 Got JSON-RPC error response 00:20:41.635 response: 00:20:41.635 { 00:20:41.635 "code": -1, 00:20:41.635 "message": "Operation not permitted" 00:20:41.635 } 00:20:41.635 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:41.895 [2024-12-06 18:32:36.435626] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:41.895 [2024-12-06 18:32:36.435663] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:41.895 request: 00:20:41.895 { 00:20:41.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.895 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.895 "psk": "key0", 00:20:41.895 "method": "nvmf_subsystem_add_host", 00:20:41.895 "req_id": 1 00:20:41.895 } 00:20:41.895 Got JSON-RPC error response 00:20:41.895 response: 00:20:41.895 { 00:20:41.895 "code": -32603, 00:20:41.895 "message": "Internal error" 00:20:41.895 } 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2154754 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2154754 ']' 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2154754 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154754 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154754' 00:20:41.895 killing process with pid 2154754 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2154754 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2154754 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.CbCNiDTonl 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:41.895 18:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2155356 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2155356 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2155356 ']' 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.895 18:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.155 [2024-12-06 18:32:36.692373] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:20:42.155 [2024-12-06 18:32:36.692457] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.155 [2024-12-06 18:32:36.784104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.155 [2024-12-06 18:32:36.812147] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.155 [2024-12-06 18:32:36.812175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.155 [2024-12-06 18:32:36.812181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.155 [2024-12-06 18:32:36.812185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.155 [2024-12-06 18:32:36.812190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
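The keyring failure above is the test's intended negative case: SPDK's file-based keyring rejects a PSK file that is readable by group or others (mode 0100666 in the error), so key0 is never registered and the subsequent nvmf_subsystem_add_host fails with "Internal error". The trace then runs chmod 0600 on the key before retrying. A minimal sketch of the working target bring-up, with the long /var/jenkins/... prefixes abbreviated to scripts/rpc.py and the same key path as this run:

    # tighten permissions first; keyring_file_add_key refuses group/world-readable files
    chmod 0600 /tmp/tmp.CbCNiDTonl
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled (logged as "experimental" above)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0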
00:20:42.155 [2024-12-06 18:32:36.812626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.726 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.726 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:42.726 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.726 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.726 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.987 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.987 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.CbCNiDTonl 00:20:42.987 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CbCNiDTonl 00:20:42.987 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.988 [2024-12-06 18:32:37.664751] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.988 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:43.248 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:43.248 [2024-12-06 18:32:37.981525] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.248 [2024-12-06 18:32:37.981719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.248 18:32:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:43.509 malloc0 00:20:43.509 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:43.769 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:20:43.769 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2155720 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2155720 /var/tmp/bdevperf.sock 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2155720 ']' 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.029 18:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.029 [2024-12-06 18:32:38.700920] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:20:44.029 [2024-12-06 18:32:38.700974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155720 ] 00:20:44.029 [2024-12-06 18:32:38.784126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.289 [2024-12-06 18:32:38.813564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.857 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.857 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.857 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:20:45.118 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:45.118 [2024-12-06 18:32:39.801270] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.118 TLSTESTn1 00:20:45.377 18:32:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:45.637 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:45.637 "subsystems": [ 00:20:45.637 { 00:20:45.637 "subsystem": "keyring", 00:20:45.637 "config": [ 00:20:45.637 { 00:20:45.637 "method": "keyring_file_add_key", 00:20:45.637 "params": { 00:20:45.637 "name": "key0", 00:20:45.637 "path": "/tmp/tmp.CbCNiDTonl" 00:20:45.637 } 00:20:45.637 } 00:20:45.637 ] 00:20:45.637 }, 00:20:45.637 { 00:20:45.637 "subsystem": "iobuf", 00:20:45.637 "config": [ 00:20:45.637 { 00:20:45.637 "method": "iobuf_set_options", 00:20:45.637 "params": { 00:20:45.637 "small_pool_count": 8192, 00:20:45.637 "large_pool_count": 1024, 00:20:45.637 "small_bufsize": 8192, 00:20:45.637 "large_bufsize": 135168, 00:20:45.637 "enable_numa": false 00:20:45.637 } 00:20:45.637 } 00:20:45.637 ] 00:20:45.637 }, 00:20:45.637 { 00:20:45.637 "subsystem": "sock", 00:20:45.637 "config": [ 00:20:45.637 { 00:20:45.637 "method": "sock_set_default_impl", 00:20:45.637 "params": { 00:20:45.637 "impl_name": "posix" 
00:20:45.637 } 00:20:45.637 }, 00:20:45.637 { 00:20:45.637 "method": "sock_impl_set_options", 00:20:45.637 "params": { 00:20:45.637 "impl_name": "ssl", 00:20:45.637 "recv_buf_size": 4096, 00:20:45.637 "send_buf_size": 4096, 00:20:45.637 "enable_recv_pipe": true, 00:20:45.637 "enable_quickack": false, 00:20:45.637 "enable_placement_id": 0, 00:20:45.637 "enable_zerocopy_send_server": true, 00:20:45.637 "enable_zerocopy_send_client": false, 00:20:45.637 "zerocopy_threshold": 0, 00:20:45.637 "tls_version": 0, 00:20:45.637 "enable_ktls": false 00:20:45.637 } 00:20:45.637 }, 00:20:45.637 { 00:20:45.637 "method": "sock_impl_set_options", 00:20:45.637 "params": { 00:20:45.637 "impl_name": "posix", 00:20:45.637 "recv_buf_size": 2097152, 00:20:45.637 "send_buf_size": 2097152, 00:20:45.637 "enable_recv_pipe": true, 00:20:45.637 "enable_quickack": false, 00:20:45.637 "enable_placement_id": 0, 00:20:45.637 "enable_zerocopy_send_server": true, 00:20:45.637 "enable_zerocopy_send_client": false, 00:20:45.637 "zerocopy_threshold": 0, 00:20:45.637 "tls_version": 0, 00:20:45.637 "enable_ktls": false 00:20:45.637 } 00:20:45.637 } 00:20:45.637 ] 00:20:45.637 }, 00:20:45.637 { 00:20:45.637 "subsystem": "vmd", 00:20:45.637 "config": [] 00:20:45.637 }, 00:20:45.637 { 00:20:45.637 "subsystem": "accel", 00:20:45.637 "config": [ 00:20:45.637 { 00:20:45.637 "method": "accel_set_options", 00:20:45.637 "params": { 00:20:45.637 "small_cache_size": 128, 00:20:45.637 "large_cache_size": 16, 00:20:45.637 "task_count": 2048, 00:20:45.637 "sequence_count": 2048, 00:20:45.637 "buf_count": 2048 00:20:45.637 } 00:20:45.637 } 00:20:45.637 ] 00:20:45.637 }, 00:20:45.638 { 00:20:45.638 "subsystem": "bdev", 00:20:45.638 "config": [ 00:20:45.638 { 00:20:45.638 "method": "bdev_set_options", 00:20:45.638 "params": { 00:20:45.638 "bdev_io_pool_size": 65535, 00:20:45.638 "bdev_io_cache_size": 256, 00:20:45.638 "bdev_auto_examine": true, 00:20:45.638 "iobuf_small_cache_size": 128, 00:20:45.638 "iobuf_large_cache_size": 16 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "bdev_raid_set_options", 00:20:45.638 "params": { 00:20:45.638 "process_window_size_kb": 1024, 00:20:45.638 "process_max_bandwidth_mb_sec": 0 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "bdev_iscsi_set_options", 00:20:45.638 "params": { 00:20:45.638 "timeout_sec": 30 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "bdev_nvme_set_options", 00:20:45.638 "params": { 00:20:45.638 "action_on_timeout": "none", 00:20:45.638 "timeout_us": 0, 00:20:45.638 "timeout_admin_us": 0, 00:20:45.638 "keep_alive_timeout_ms": 10000, 00:20:45.638 "arbitration_burst": 0, 00:20:45.638 "low_priority_weight": 0, 00:20:45.638 "medium_priority_weight": 0, 00:20:45.638 "high_priority_weight": 0, 00:20:45.638 "nvme_adminq_poll_period_us": 10000, 00:20:45.638 "nvme_ioq_poll_period_us": 0, 00:20:45.638 "io_queue_requests": 0, 00:20:45.638 "delay_cmd_submit": true, 00:20:45.638 "transport_retry_count": 4, 00:20:45.638 "bdev_retry_count": 3, 00:20:45.638 "transport_ack_timeout": 0, 00:20:45.638 "ctrlr_loss_timeout_sec": 0, 00:20:45.638 "reconnect_delay_sec": 0, 00:20:45.638 "fast_io_fail_timeout_sec": 0, 00:20:45.638 "disable_auto_failback": false, 00:20:45.638 "generate_uuids": false, 00:20:45.638 "transport_tos": 0, 00:20:45.638 "nvme_error_stat": false, 00:20:45.638 "rdma_srq_size": 0, 00:20:45.638 "io_path_stat": false, 00:20:45.638 "allow_accel_sequence": false, 00:20:45.638 "rdma_max_cq_size": 0, 00:20:45.638 
"rdma_cm_event_timeout_ms": 0, 00:20:45.638 "dhchap_digests": [ 00:20:45.638 "sha256", 00:20:45.638 "sha384", 00:20:45.638 "sha512" 00:20:45.638 ], 00:20:45.638 "dhchap_dhgroups": [ 00:20:45.638 "null", 00:20:45.638 "ffdhe2048", 00:20:45.638 "ffdhe3072", 00:20:45.638 "ffdhe4096", 00:20:45.638 "ffdhe6144", 00:20:45.638 "ffdhe8192" 00:20:45.638 ] 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "bdev_nvme_set_hotplug", 00:20:45.638 "params": { 00:20:45.638 "period_us": 100000, 00:20:45.638 "enable": false 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "bdev_malloc_create", 00:20:45.638 "params": { 00:20:45.638 "name": "malloc0", 00:20:45.638 "num_blocks": 8192, 00:20:45.638 "block_size": 4096, 00:20:45.638 "physical_block_size": 4096, 00:20:45.638 "uuid": "6635ef54-2131-4928-a6cd-c9dac786d819", 00:20:45.638 "optimal_io_boundary": 0, 00:20:45.638 "md_size": 0, 00:20:45.638 "dif_type": 0, 00:20:45.638 "dif_is_head_of_md": false, 00:20:45.638 "dif_pi_format": 0 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "bdev_wait_for_examine" 00:20:45.638 } 00:20:45.638 ] 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "subsystem": "nbd", 00:20:45.638 "config": [] 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "subsystem": "scheduler", 00:20:45.638 "config": [ 00:20:45.638 { 00:20:45.638 "method": "framework_set_scheduler", 00:20:45.638 "params": { 00:20:45.638 "name": "static" 00:20:45.638 } 00:20:45.638 } 00:20:45.638 ] 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "subsystem": "nvmf", 00:20:45.638 "config": [ 00:20:45.638 { 00:20:45.638 "method": "nvmf_set_config", 00:20:45.638 "params": { 00:20:45.638 "discovery_filter": "match_any", 00:20:45.638 "admin_cmd_passthru": { 00:20:45.638 "identify_ctrlr": false 00:20:45.638 }, 00:20:45.638 "dhchap_digests": [ 00:20:45.638 "sha256", 00:20:45.638 "sha384", 00:20:45.638 "sha512" 00:20:45.638 ], 00:20:45.638 "dhchap_dhgroups": [ 00:20:45.638 "null", 00:20:45.638 "ffdhe2048", 00:20:45.638 "ffdhe3072", 00:20:45.638 "ffdhe4096", 00:20:45.638 "ffdhe6144", 00:20:45.638 "ffdhe8192" 00:20:45.638 ] 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "nvmf_set_max_subsystems", 00:20:45.638 "params": { 00:20:45.638 "max_subsystems": 1024 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "nvmf_set_crdt", 00:20:45.638 "params": { 00:20:45.638 "crdt1": 0, 00:20:45.638 "crdt2": 0, 00:20:45.638 "crdt3": 0 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "nvmf_create_transport", 00:20:45.638 "params": { 00:20:45.638 "trtype": "TCP", 00:20:45.638 "max_queue_depth": 128, 00:20:45.638 "max_io_qpairs_per_ctrlr": 127, 00:20:45.638 "in_capsule_data_size": 4096, 00:20:45.638 "max_io_size": 131072, 00:20:45.638 "io_unit_size": 131072, 00:20:45.638 "max_aq_depth": 128, 00:20:45.638 "num_shared_buffers": 511, 00:20:45.638 "buf_cache_size": 4294967295, 00:20:45.638 "dif_insert_or_strip": false, 00:20:45.638 "zcopy": false, 00:20:45.638 "c2h_success": false, 00:20:45.638 "sock_priority": 0, 00:20:45.638 "abort_timeout_sec": 1, 00:20:45.638 "ack_timeout": 0, 00:20:45.638 "data_wr_pool_size": 0 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "nvmf_create_subsystem", 00:20:45.638 "params": { 00:20:45.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.638 "allow_any_host": false, 00:20:45.638 "serial_number": "SPDK00000000000001", 00:20:45.638 "model_number": "SPDK bdev Controller", 00:20:45.638 "max_namespaces": 10, 00:20:45.638 "min_cntlid": 1, 00:20:45.638 
"max_cntlid": 65519, 00:20:45.638 "ana_reporting": false 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "nvmf_subsystem_add_host", 00:20:45.638 "params": { 00:20:45.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.638 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.638 "psk": "key0" 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "nvmf_subsystem_add_ns", 00:20:45.638 "params": { 00:20:45.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.638 "namespace": { 00:20:45.638 "nsid": 1, 00:20:45.638 "bdev_name": "malloc0", 00:20:45.638 "nguid": "6635EF5421314928A6CDC9DAC786D819", 00:20:45.638 "uuid": "6635ef54-2131-4928-a6cd-c9dac786d819", 00:20:45.638 "no_auto_visible": false 00:20:45.638 } 00:20:45.638 } 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "method": "nvmf_subsystem_add_listener", 00:20:45.638 "params": { 00:20:45.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.638 "listen_address": { 00:20:45.638 "trtype": "TCP", 00:20:45.638 "adrfam": "IPv4", 00:20:45.638 "traddr": "10.0.0.2", 00:20:45.638 "trsvcid": "4420" 00:20:45.638 }, 00:20:45.638 "secure_channel": true 00:20:45.638 } 00:20:45.638 } 00:20:45.638 ] 00:20:45.638 } 00:20:45.638 ] 00:20:45.638 }' 00:20:45.638 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.638 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:45.638 "subsystems": [ 00:20:45.638 { 00:20:45.638 "subsystem": "keyring", 00:20:45.638 "config": [ 00:20:45.638 { 00:20:45.638 "method": "keyring_file_add_key", 00:20:45.638 "params": { 00:20:45.638 "name": "key0", 00:20:45.638 "path": "/tmp/tmp.CbCNiDTonl" 00:20:45.638 } 00:20:45.638 } 00:20:45.638 ] 00:20:45.638 }, 00:20:45.638 { 00:20:45.638 "subsystem": "iobuf", 00:20:45.638 "config": [ 00:20:45.638 { 00:20:45.639 "method": "iobuf_set_options", 00:20:45.639 "params": { 00:20:45.639 "small_pool_count": 8192, 00:20:45.639 "large_pool_count": 1024, 00:20:45.639 "small_bufsize": 8192, 00:20:45.639 "large_bufsize": 135168, 00:20:45.639 "enable_numa": false 00:20:45.639 } 00:20:45.639 } 00:20:45.639 ] 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "subsystem": "sock", 00:20:45.639 "config": [ 00:20:45.639 { 00:20:45.639 "method": "sock_set_default_impl", 00:20:45.639 "params": { 00:20:45.639 "impl_name": "posix" 00:20:45.639 } 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "method": "sock_impl_set_options", 00:20:45.639 "params": { 00:20:45.639 "impl_name": "ssl", 00:20:45.639 "recv_buf_size": 4096, 00:20:45.639 "send_buf_size": 4096, 00:20:45.639 "enable_recv_pipe": true, 00:20:45.639 "enable_quickack": false, 00:20:45.639 "enable_placement_id": 0, 00:20:45.639 "enable_zerocopy_send_server": true, 00:20:45.639 "enable_zerocopy_send_client": false, 00:20:45.639 "zerocopy_threshold": 0, 00:20:45.639 "tls_version": 0, 00:20:45.639 "enable_ktls": false 00:20:45.639 } 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "method": "sock_impl_set_options", 00:20:45.639 "params": { 00:20:45.639 "impl_name": "posix", 00:20:45.639 "recv_buf_size": 2097152, 00:20:45.639 "send_buf_size": 2097152, 00:20:45.639 "enable_recv_pipe": true, 00:20:45.639 "enable_quickack": false, 00:20:45.639 "enable_placement_id": 0, 00:20:45.639 "enable_zerocopy_send_server": true, 00:20:45.639 "enable_zerocopy_send_client": false, 00:20:45.639 "zerocopy_threshold": 0, 00:20:45.639 "tls_version": 0, 00:20:45.639 "enable_ktls": false 00:20:45.639 } 00:20:45.639 
} 00:20:45.639 ] 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "subsystem": "vmd", 00:20:45.639 "config": [] 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "subsystem": "accel", 00:20:45.639 "config": [ 00:20:45.639 { 00:20:45.639 "method": "accel_set_options", 00:20:45.639 "params": { 00:20:45.639 "small_cache_size": 128, 00:20:45.639 "large_cache_size": 16, 00:20:45.639 "task_count": 2048, 00:20:45.639 "sequence_count": 2048, 00:20:45.639 "buf_count": 2048 00:20:45.639 } 00:20:45.639 } 00:20:45.639 ] 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "subsystem": "bdev", 00:20:45.639 "config": [ 00:20:45.639 { 00:20:45.639 "method": "bdev_set_options", 00:20:45.639 "params": { 00:20:45.639 "bdev_io_pool_size": 65535, 00:20:45.639 "bdev_io_cache_size": 256, 00:20:45.639 "bdev_auto_examine": true, 00:20:45.639 "iobuf_small_cache_size": 128, 00:20:45.639 "iobuf_large_cache_size": 16 00:20:45.639 } 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "method": "bdev_raid_set_options", 00:20:45.639 "params": { 00:20:45.639 "process_window_size_kb": 1024, 00:20:45.639 "process_max_bandwidth_mb_sec": 0 00:20:45.639 } 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "method": "bdev_iscsi_set_options", 00:20:45.639 "params": { 00:20:45.639 "timeout_sec": 30 00:20:45.639 } 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "method": "bdev_nvme_set_options", 00:20:45.639 "params": { 00:20:45.639 "action_on_timeout": "none", 00:20:45.639 "timeout_us": 0, 00:20:45.639 "timeout_admin_us": 0, 00:20:45.639 "keep_alive_timeout_ms": 10000, 00:20:45.639 "arbitration_burst": 0, 00:20:45.639 "low_priority_weight": 0, 00:20:45.639 "medium_priority_weight": 0, 00:20:45.639 "high_priority_weight": 0, 00:20:45.639 "nvme_adminq_poll_period_us": 10000, 00:20:45.639 "nvme_ioq_poll_period_us": 0, 00:20:45.639 "io_queue_requests": 512, 00:20:45.639 "delay_cmd_submit": true, 00:20:45.639 "transport_retry_count": 4, 00:20:45.639 "bdev_retry_count": 3, 00:20:45.639 "transport_ack_timeout": 0, 00:20:45.639 "ctrlr_loss_timeout_sec": 0, 00:20:45.639 "reconnect_delay_sec": 0, 00:20:45.639 "fast_io_fail_timeout_sec": 0, 00:20:45.639 "disable_auto_failback": false, 00:20:45.639 "generate_uuids": false, 00:20:45.639 "transport_tos": 0, 00:20:45.639 "nvme_error_stat": false, 00:20:45.639 "rdma_srq_size": 0, 00:20:45.639 "io_path_stat": false, 00:20:45.639 "allow_accel_sequence": false, 00:20:45.639 "rdma_max_cq_size": 0, 00:20:45.639 "rdma_cm_event_timeout_ms": 0, 00:20:45.639 "dhchap_digests": [ 00:20:45.639 "sha256", 00:20:45.639 "sha384", 00:20:45.639 "sha512" 00:20:45.639 ], 00:20:45.639 "dhchap_dhgroups": [ 00:20:45.639 "null", 00:20:45.639 "ffdhe2048", 00:20:45.639 "ffdhe3072", 00:20:45.639 "ffdhe4096", 00:20:45.639 "ffdhe6144", 00:20:45.639 "ffdhe8192" 00:20:45.639 ] 00:20:45.639 } 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "method": "bdev_nvme_attach_controller", 00:20:45.639 "params": { 00:20:45.639 "name": "TLSTEST", 00:20:45.639 "trtype": "TCP", 00:20:45.639 "adrfam": "IPv4", 00:20:45.639 "traddr": "10.0.0.2", 00:20:45.639 "trsvcid": "4420", 00:20:45.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.639 "prchk_reftag": false, 00:20:45.639 "prchk_guard": false, 00:20:45.639 "ctrlr_loss_timeout_sec": 0, 00:20:45.639 "reconnect_delay_sec": 0, 00:20:45.639 "fast_io_fail_timeout_sec": 0, 00:20:45.639 "psk": "key0", 00:20:45.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.639 "hdgst": false, 00:20:45.639 "ddgst": false, 00:20:45.639 "multipath": "multipath" 00:20:45.639 } 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "method": 
"bdev_nvme_set_hotplug", 00:20:45.639 "params": { 00:20:45.639 "period_us": 100000, 00:20:45.639 "enable": false 00:20:45.639 } 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "method": "bdev_wait_for_examine" 00:20:45.639 } 00:20:45.639 ] 00:20:45.639 }, 00:20:45.639 { 00:20:45.639 "subsystem": "nbd", 00:20:45.639 "config": [] 00:20:45.639 } 00:20:45.639 ] 00:20:45.639 }' 00:20:45.639 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2155720 00:20:45.639 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2155720 ']' 00:20:45.639 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2155720 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155720 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155720' 00:20:45.900 killing process with pid 2155720 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2155720 00:20:45.900 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.900 00:20:45.900 Latency(us) 00:20:45.900 [2024-12-06T17:32:40.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.900 [2024-12-06T17:32:40.684Z] =================================================================================================================== 00:20:45.900 [2024-12-06T17:32:40.684Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2155720 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2155356 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2155356 ']' 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2155356 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155356 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155356' 00:20:45.900 killing process with pid 2155356 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2155356 00:20:45.900 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2155356 00:20:46.161 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:46.161 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.161 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.161 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.161 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:46.161 "subsystems": [ 00:20:46.161 { 00:20:46.161 "subsystem": "keyring", 00:20:46.161 "config": [ 00:20:46.161 { 00:20:46.161 "method": "keyring_file_add_key", 00:20:46.161 "params": { 00:20:46.161 "name": "key0", 00:20:46.161 "path": "/tmp/tmp.CbCNiDTonl" 00:20:46.161 } 00:20:46.161 } 00:20:46.161 ] 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "subsystem": "iobuf", 00:20:46.161 "config": [ 00:20:46.161 { 00:20:46.161 "method": "iobuf_set_options", 00:20:46.161 "params": { 00:20:46.161 "small_pool_count": 8192, 00:20:46.161 "large_pool_count": 1024, 00:20:46.161 "small_bufsize": 8192, 00:20:46.161 "large_bufsize": 135168, 00:20:46.161 "enable_numa": false 00:20:46.161 } 00:20:46.161 } 00:20:46.161 ] 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "subsystem": "sock", 00:20:46.161 "config": [ 00:20:46.161 { 00:20:46.161 "method": "sock_set_default_impl", 00:20:46.161 "params": { 00:20:46.161 "impl_name": "posix" 00:20:46.161 } 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "method": "sock_impl_set_options", 00:20:46.161 "params": { 00:20:46.161 "impl_name": "ssl", 00:20:46.161 "recv_buf_size": 4096, 00:20:46.161 "send_buf_size": 4096, 00:20:46.161 "enable_recv_pipe": true, 00:20:46.161 "enable_quickack": false, 00:20:46.161 "enable_placement_id": 0, 00:20:46.161 "enable_zerocopy_send_server": true, 00:20:46.161 "enable_zerocopy_send_client": false, 00:20:46.161 "zerocopy_threshold": 0, 00:20:46.161 "tls_version": 0, 00:20:46.161 "enable_ktls": false 00:20:46.161 } 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "method": "sock_impl_set_options", 00:20:46.161 "params": { 00:20:46.161 "impl_name": "posix", 00:20:46.161 "recv_buf_size": 2097152, 00:20:46.161 "send_buf_size": 2097152, 00:20:46.161 "enable_recv_pipe": true, 00:20:46.161 "enable_quickack": false, 00:20:46.161 "enable_placement_id": 0, 00:20:46.161 "enable_zerocopy_send_server": true, 00:20:46.161 "enable_zerocopy_send_client": false, 00:20:46.161 "zerocopy_threshold": 0, 00:20:46.161 "tls_version": 0, 00:20:46.161 "enable_ktls": false 00:20:46.161 } 00:20:46.161 } 00:20:46.161 ] 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "subsystem": "vmd", 00:20:46.161 "config": [] 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "subsystem": "accel", 00:20:46.161 "config": [ 00:20:46.161 { 00:20:46.161 "method": "accel_set_options", 00:20:46.161 "params": { 00:20:46.161 "small_cache_size": 128, 00:20:46.161 "large_cache_size": 16, 00:20:46.161 "task_count": 2048, 00:20:46.161 "sequence_count": 2048, 00:20:46.161 "buf_count": 2048 00:20:46.161 } 00:20:46.161 } 00:20:46.161 ] 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "subsystem": "bdev", 00:20:46.161 "config": [ 00:20:46.161 { 00:20:46.161 "method": "bdev_set_options", 00:20:46.161 "params": { 00:20:46.161 "bdev_io_pool_size": 65535, 00:20:46.161 "bdev_io_cache_size": 256, 00:20:46.161 "bdev_auto_examine": true, 00:20:46.161 "iobuf_small_cache_size": 128, 00:20:46.161 "iobuf_large_cache_size": 16 00:20:46.161 } 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "method": "bdev_raid_set_options", 00:20:46.161 "params": { 00:20:46.161 
"process_window_size_kb": 1024, 00:20:46.161 "process_max_bandwidth_mb_sec": 0 00:20:46.161 } 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "method": "bdev_iscsi_set_options", 00:20:46.161 "params": { 00:20:46.161 "timeout_sec": 30 00:20:46.161 } 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "method": "bdev_nvme_set_options", 00:20:46.161 "params": { 00:20:46.161 "action_on_timeout": "none", 00:20:46.161 "timeout_us": 0, 00:20:46.161 "timeout_admin_us": 0, 00:20:46.161 "keep_alive_timeout_ms": 10000, 00:20:46.161 "arbitration_burst": 0, 00:20:46.161 "low_priority_weight": 0, 00:20:46.161 "medium_priority_weight": 0, 00:20:46.161 "high_priority_weight": 0, 00:20:46.161 "nvme_adminq_poll_period_us": 10000, 00:20:46.161 "nvme_ioq_poll_period_us": 0, 00:20:46.161 "io_queue_requests": 0, 00:20:46.161 "delay_cmd_submit": true, 00:20:46.161 "transport_retry_count": 4, 00:20:46.161 "bdev_retry_count": 3, 00:20:46.161 "transport_ack_timeout": 0, 00:20:46.161 "ctrlr_loss_timeout_sec": 0, 00:20:46.161 "reconnect_delay_sec": 0, 00:20:46.161 "fast_io_fail_timeout_sec": 0, 00:20:46.161 "disable_auto_failback": false, 00:20:46.161 "generate_uuids": false, 00:20:46.161 "transport_tos": 0, 00:20:46.161 "nvme_error_stat": false, 00:20:46.161 "rdma_srq_size": 0, 00:20:46.161 "io_path_stat": false, 00:20:46.161 "allow_accel_sequence": false, 00:20:46.161 "rdma_max_cq_size": 0, 00:20:46.161 "rdma_cm_event_timeout_ms": 0, 00:20:46.161 "dhchap_digests": [ 00:20:46.161 "sha256", 00:20:46.161 "sha384", 00:20:46.161 "sha512" 00:20:46.161 ], 00:20:46.161 "dhchap_dhgroups": [ 00:20:46.161 "null", 00:20:46.161 "ffdhe2048", 00:20:46.161 "ffdhe3072", 00:20:46.161 "ffdhe4096", 00:20:46.161 "ffdhe6144", 00:20:46.161 "ffdhe8192" 00:20:46.161 ] 00:20:46.161 } 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "method": "bdev_nvme_set_hotplug", 00:20:46.161 "params": { 00:20:46.161 "period_us": 100000, 00:20:46.161 "enable": false 00:20:46.161 } 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "method": "bdev_malloc_create", 00:20:46.161 "params": { 00:20:46.161 "name": "malloc0", 00:20:46.161 "num_blocks": 8192, 00:20:46.161 "block_size": 4096, 00:20:46.161 "physical_block_size": 4096, 00:20:46.161 "uuid": "6635ef54-2131-4928-a6cd-c9dac786d819", 00:20:46.161 "optimal_io_boundary": 0, 00:20:46.161 "md_size": 0, 00:20:46.161 "dif_type": 0, 00:20:46.161 "dif_is_head_of_md": false, 00:20:46.161 "dif_pi_format": 0 00:20:46.161 } 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "method": "bdev_wait_for_examine" 00:20:46.161 } 00:20:46.161 ] 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "subsystem": "nbd", 00:20:46.161 "config": [] 00:20:46.161 }, 00:20:46.161 { 00:20:46.161 "subsystem": "scheduler", 00:20:46.161 "config": [ 00:20:46.161 { 00:20:46.161 "method": "framework_set_scheduler", 00:20:46.161 "params": { 00:20:46.161 "name": "static" 00:20:46.161 } 00:20:46.161 } 00:20:46.161 ] 00:20:46.162 }, 00:20:46.162 { 00:20:46.162 "subsystem": "nvmf", 00:20:46.162 "config": [ 00:20:46.162 { 00:20:46.162 "method": "nvmf_set_config", 00:20:46.162 "params": { 00:20:46.162 "discovery_filter": "match_any", 00:20:46.162 "admin_cmd_passthru": { 00:20:46.162 "identify_ctrlr": false 00:20:46.162 }, 00:20:46.162 "dhchap_digests": [ 00:20:46.162 "sha256", 00:20:46.162 "sha384", 00:20:46.162 "sha512" 00:20:46.162 ], 00:20:46.162 "dhchap_dhgroups": [ 00:20:46.162 "null", 00:20:46.162 "ffdhe2048", 00:20:46.162 "ffdhe3072", 00:20:46.162 "ffdhe4096", 00:20:46.162 "ffdhe6144", 00:20:46.162 "ffdhe8192" 00:20:46.162 ] 00:20:46.162 } 00:20:46.162 }, 00:20:46.162 { 
00:20:46.162 "method": "nvmf_set_max_subsystems", 00:20:46.162 "params": { 00:20:46.162 "max_subsystems": 1024 00:20:46.162 } 00:20:46.162 }, 00:20:46.162 { 00:20:46.162 "method": "nvmf_set_crdt", 00:20:46.162 "params": { 00:20:46.162 "crdt1": 0, 00:20:46.162 "crdt2": 0, 00:20:46.162 "crdt3": 0 00:20:46.162 } 00:20:46.162 }, 00:20:46.162 { 00:20:46.162 "method": "nvmf_create_transport", 00:20:46.162 "params": { 00:20:46.162 "trtype": "TCP", 00:20:46.162 "max_queue_depth": 128, 00:20:46.162 "max_io_qpairs_per_ctrlr": 127, 00:20:46.162 "in_capsule_data_size": 4096, 00:20:46.162 "max_io_size": 131072, 00:20:46.162 "io_unit_size": 131072, 00:20:46.162 "max_aq_depth": 128, 00:20:46.162 "num_shared_buffers": 511, 00:20:46.162 "buf_cache_size": 4294967295, 00:20:46.162 "dif_insert_or_strip": false, 00:20:46.162 "zcopy": false, 00:20:46.162 "c2h_success": false, 00:20:46.162 "sock_priority": 0, 00:20:46.162 "abort_timeout_sec": 1, 00:20:46.162 "ack_timeout": 0, 00:20:46.162 "data_wr_pool_size": 0 00:20:46.162 } 00:20:46.162 }, 00:20:46.162 { 00:20:46.162 "method": "nvmf_create_subsystem", 00:20:46.162 "params": { 00:20:46.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.162 "allow_any_host": false, 00:20:46.162 "serial_number": "SPDK00000000000001", 00:20:46.162 "model_number": "SPDK bdev Controller", 00:20:46.162 "max_namespaces": 10, 00:20:46.162 "min_cntlid": 1, 00:20:46.162 "max_cntlid": 65519, 00:20:46.162 "ana_reporting": false 00:20:46.162 } 00:20:46.162 }, 00:20:46.162 { 00:20:46.162 "method": "nvmf_subsystem_add_host", 00:20:46.162 "params": { 00:20:46.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.162 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.162 "psk": "key0" 00:20:46.162 } 00:20:46.162 }, 00:20:46.162 { 00:20:46.162 "method": "nvmf_subsystem_add_ns", 00:20:46.162 "params": { 00:20:46.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.162 "namespace": { 00:20:46.162 "nsid": 1, 00:20:46.162 "bdev_name": "malloc0", 00:20:46.162 "nguid": "6635EF5421314928A6CDC9DAC786D819", 00:20:46.162 "uuid": "6635ef54-2131-4928-a6cd-c9dac786d819", 00:20:46.162 "no_auto_visible": false 00:20:46.162 } 00:20:46.162 } 00:20:46.162 }, 00:20:46.162 { 00:20:46.162 "method": "nvmf_subsystem_add_listener", 00:20:46.162 "params": { 00:20:46.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.162 "listen_address": { 00:20:46.162 "trtype": "TCP", 00:20:46.162 "adrfam": "IPv4", 00:20:46.162 "traddr": "10.0.0.2", 00:20:46.162 "trsvcid": "4420" 00:20:46.162 }, 00:20:46.162 "secure_channel": true 00:20:46.162 } 00:20:46.162 } 00:20:46.162 ] 00:20:46.162 } 00:20:46.162 ] 00:20:46.162 }' 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2156078 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2156078 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2156078 ']' 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:46.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.162 18:32:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.162 [2024-12-06 18:32:40.831309] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:20:46.162 [2024-12-06 18:32:40.831362] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.162 [2024-12-06 18:32:40.927389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.422 [2024-12-06 18:32:40.956726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.422 [2024-12-06 18:32:40.956758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.422 [2024-12-06 18:32:40.956763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.422 [2024-12-06 18:32:40.956768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.422 [2024-12-06 18:32:40.956772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.422 [2024-12-06 18:32:40.957250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.422 [2024-12-06 18:32:41.150896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.422 [2024-12-06 18:32:41.182921] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.422 [2024-12-06 18:32:41.183121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2156366 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2156366 /var/tmp/bdevperf.sock 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2156366 ']' 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
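The two JSON blobs above (tgtconf and bdevperfconf) are not hand-written: target/tls.sh captures them from the live applications with save_config and echoes them back through a file descriptor (-c /dev/fd/62 for the target, -c /dev/fd/63 for bdevperf), so the restarted processes come up with an identical configuration. A sketch of the same round-trip under the assumption that a temp file is acceptable (the test uses /dev/fd to avoid one, and /tmp/tgt.json here is a hypothetical name):

    # capture the running target's full configuration as JSON
    scripts/rpc.py save_config > /tmp/tgt.json
    # start a fresh target preloaded with that exact configuration
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /tmp/tgt.json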
00:20:47.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.031 18:32:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:47.031 "subsystems": [ 00:20:47.031 { 00:20:47.031 "subsystem": "keyring", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "keyring_file_add_key", 00:20:47.031 "params": { 00:20:47.031 "name": "key0", 00:20:47.031 "path": "/tmp/tmp.CbCNiDTonl" 00:20:47.031 } 00:20:47.031 } 00:20:47.031 ] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "iobuf", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "iobuf_set_options", 00:20:47.031 "params": { 00:20:47.031 "small_pool_count": 8192, 00:20:47.031 "large_pool_count": 1024, 00:20:47.031 "small_bufsize": 8192, 00:20:47.031 "large_bufsize": 135168, 00:20:47.031 "enable_numa": false 00:20:47.031 } 00:20:47.031 } 00:20:47.031 ] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "sock", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "sock_set_default_impl", 00:20:47.031 "params": { 00:20:47.031 "impl_name": "posix" 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "sock_impl_set_options", 00:20:47.031 "params": { 00:20:47.031 "impl_name": "ssl", 00:20:47.031 "recv_buf_size": 4096, 00:20:47.031 "send_buf_size": 4096, 00:20:47.031 "enable_recv_pipe": true, 00:20:47.031 "enable_quickack": false, 00:20:47.031 "enable_placement_id": 0, 00:20:47.031 "enable_zerocopy_send_server": true, 00:20:47.031 "enable_zerocopy_send_client": false, 00:20:47.031 "zerocopy_threshold": 0, 00:20:47.031 "tls_version": 0, 00:20:47.031 "enable_ktls": false 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "sock_impl_set_options", 00:20:47.031 "params": { 00:20:47.031 "impl_name": "posix", 00:20:47.031 "recv_buf_size": 2097152, 00:20:47.031 "send_buf_size": 2097152, 00:20:47.031 "enable_recv_pipe": true, 00:20:47.031 "enable_quickack": false, 00:20:47.031 "enable_placement_id": 0, 00:20:47.031 "enable_zerocopy_send_server": true, 00:20:47.031 "enable_zerocopy_send_client": false, 00:20:47.031 "zerocopy_threshold": 0, 00:20:47.031 "tls_version": 0, 00:20:47.031 "enable_ktls": false 00:20:47.031 } 00:20:47.031 } 00:20:47.031 ] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "vmd", 00:20:47.031 "config": [] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "accel", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "accel_set_options", 00:20:47.031 "params": { 00:20:47.031 "small_cache_size": 128, 00:20:47.031 "large_cache_size": 16, 00:20:47.031 "task_count": 2048, 00:20:47.031 "sequence_count": 2048, 00:20:47.031 "buf_count": 2048 00:20:47.031 } 00:20:47.031 } 00:20:47.031 ] 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "subsystem": "bdev", 00:20:47.031 "config": [ 00:20:47.031 { 00:20:47.031 "method": "bdev_set_options", 00:20:47.031 "params": { 00:20:47.031 "bdev_io_pool_size": 65535, 00:20:47.031 "bdev_io_cache_size": 256, 00:20:47.031 "bdev_auto_examine": true, 00:20:47.031 "iobuf_small_cache_size": 128, 
00:20:47.031 "iobuf_large_cache_size": 16 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "bdev_raid_set_options", 00:20:47.031 "params": { 00:20:47.031 "process_window_size_kb": 1024, 00:20:47.031 "process_max_bandwidth_mb_sec": 0 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "bdev_iscsi_set_options", 00:20:47.031 "params": { 00:20:47.031 "timeout_sec": 30 00:20:47.031 } 00:20:47.031 }, 00:20:47.031 { 00:20:47.031 "method": "bdev_nvme_set_options", 00:20:47.031 "params": { 00:20:47.031 "action_on_timeout": "none", 00:20:47.031 "timeout_us": 0, 00:20:47.031 "timeout_admin_us": 0, 00:20:47.031 "keep_alive_timeout_ms": 10000, 00:20:47.031 "arbitration_burst": 0, 00:20:47.031 "low_priority_weight": 0, 00:20:47.031 "medium_priority_weight": 0, 00:20:47.031 "high_priority_weight": 0, 00:20:47.031 "nvme_adminq_poll_period_us": 10000, 00:20:47.031 "nvme_ioq_poll_period_us": 0, 00:20:47.031 "io_queue_requests": 512, 00:20:47.031 "delay_cmd_submit": true, 00:20:47.031 "transport_retry_count": 4, 00:20:47.031 "bdev_retry_count": 3, 00:20:47.031 "transport_ack_timeout": 0, 00:20:47.031 "ctrlr_loss_timeout_sec": 0, 00:20:47.031 "reconnect_delay_sec": 0, 00:20:47.031 "fast_io_fail_timeout_sec": 0, 00:20:47.031 "disable_auto_failback": false, 00:20:47.031 "generate_uuids": false, 00:20:47.031 "transport_tos": 0, 00:20:47.031 "nvme_error_stat": false, 00:20:47.031 "rdma_srq_size": 0, 00:20:47.031 "io_path_stat": false, 00:20:47.031 "allow_accel_sequence": false, 00:20:47.031 "rdma_max_cq_size": 0, 00:20:47.031 "rdma_cm_event_timeout_ms": 0, 00:20:47.031 "dhchap_digests": [ 00:20:47.031 "sha256", 00:20:47.031 "sha384", 00:20:47.031 "sha512" 00:20:47.031 ], 00:20:47.031 "dhchap_dhgroups": [ 00:20:47.031 "null", 00:20:47.031 "ffdhe2048", 00:20:47.031 "ffdhe3072", 00:20:47.031 "ffdhe4096", 00:20:47.031 "ffdhe6144", 00:20:47.031 "ffdhe8192" 00:20:47.032 ] 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "bdev_nvme_attach_controller", 00:20:47.032 "params": { 00:20:47.032 "name": "TLSTEST", 00:20:47.032 "trtype": "TCP", 00:20:47.032 "adrfam": "IPv4", 00:20:47.032 "traddr": "10.0.0.2", 00:20:47.032 "trsvcid": "4420", 00:20:47.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.032 "prchk_reftag": false, 00:20:47.032 "prchk_guard": false, 00:20:47.032 "ctrlr_loss_timeout_sec": 0, 00:20:47.032 "reconnect_delay_sec": 0, 00:20:47.032 "fast_io_fail_timeout_sec": 0, 00:20:47.032 "psk": "key0", 00:20:47.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.032 "hdgst": false, 00:20:47.032 "ddgst": false, 00:20:47.032 "multipath": "multipath" 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "bdev_nvme_set_hotplug", 00:20:47.032 "params": { 00:20:47.032 "period_us": 100000, 00:20:47.032 "enable": false 00:20:47.032 } 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "method": "bdev_wait_for_examine" 00:20:47.032 } 00:20:47.032 ] 00:20:47.032 }, 00:20:47.032 { 00:20:47.032 "subsystem": "nbd", 00:20:47.032 "config": [] 00:20:47.032 } 00:20:47.032 ] 00:20:47.032 }' 00:20:47.032 [2024-12-06 18:32:41.711552] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:20:47.032 [2024-12-06 18:32:41.711608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156366 ]
00:20:47.032 [2024-12-06 18:32:41.797546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:47.291 [2024-12-06 18:32:41.826559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:47.291 [2024-12-06 18:32:41.961538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:47.860 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:47.860 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:20:47.860 18:32:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:20:47.860 Running I/O for 10 seconds...
00:20:49.819 6293.00 IOPS, 24.58 MiB/s
[2024-12-06T17:32:45.985Z] 5968.50 IOPS, 23.31 MiB/s
[2024-12-06T17:32:46.929Z] 5844.67 IOPS, 22.83 MiB/s
[2024-12-06T17:32:47.871Z] 5843.25 IOPS, 22.83 MiB/s
[2024-12-06T17:32:48.814Z] 5872.80 IOPS, 22.94 MiB/s
[2024-12-06T17:32:49.756Z] 5820.83 IOPS, 22.74 MiB/s
[2024-12-06T17:32:50.698Z] 5821.14 IOPS, 22.74 MiB/s
[2024-12-06T17:32:51.641Z] 5819.12 IOPS, 22.73 MiB/s
[2024-12-06T17:32:53.027Z] 5768.00 IOPS, 22.53 MiB/s
[2024-12-06T17:32:53.027Z] 5751.60 IOPS, 22.47 MiB/s
00:20:58.243 Latency(us)
[2024-12-06T17:32:53.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:58.243 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:58.243 Verification LBA range: start 0x0 length 0x2000
00:20:58.243 TLSTESTn1 : 10.01 5756.53 22.49 0.00 0.00 22205.91 4696.75 53521.07
[2024-12-06T17:32:53.027Z] ===================================================================================================================
[2024-12-06T17:32:53.027Z] Total : 5756.53 22.49 0.00 0.00 22205.91 4696.75 53521.07
00:20:58.243 {
00:20:58.243 "results": [
00:20:58.243 {
00:20:58.243 "job": "TLSTESTn1",
00:20:58.243 "core_mask": "0x4",
00:20:58.243 "workload": "verify",
00:20:58.243 "status": "finished",
00:20:58.243 "verify_range": {
00:20:58.243 "start": 0,
00:20:58.243 "length": 8192
00:20:58.243 },
00:20:58.243 "queue_depth": 128,
00:20:58.243 "io_size": 4096,
00:20:58.243 "runtime": 10.013676,
00:20:58.243 "iops": 5756.52737316446,
00:20:58.243 "mibps": 22.486435051423673,
00:20:58.243 "io_failed": 0,
00:20:58.243 "io_timeout": 0,
00:20:58.243 "avg_latency_us": 22205.90838294821,
00:20:58.243 "min_latency_us": 4696.746666666667,
00:20:58.243 "max_latency_us": 53521.066666666666
00:20:58.243 }
00:20:58.243 ],
00:20:58.243 "core_count": 1
00:20:58.243 }
00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2156366
00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2156366 ']'
00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2156366
00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@959 -- # uname 00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2156366 00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2156366' 00:20:58.243 killing process with pid 2156366 00:20:58.243 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2156366 00:20:58.243 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.243 00:20:58.243 Latency(us) 00:20:58.243 [2024-12-06T17:32:53.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.244 [2024-12-06T17:32:53.028Z] =================================================================================================================== 00:20:58.244 [2024-12-06T17:32:53.028Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2156366 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2156078 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2156078 ']' 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2156078 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2156078 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2156078' 00:20:58.244 killing process with pid 2156078 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2156078 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2156078 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:58.244 18:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2158453 00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2158453 00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
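killprocess, used twice above to tear down bdevperf (2156366) and then the target (2156078), is a small guard around kill: it checks that the pid is non-empty and still alive, confirms via ps that it is not signalling sudo itself, then kills and reaps the process. A hypothetical condensed form of the helper, reconstructed only from the trace lines above and not from autotest_common.sh itself:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                                      # mirrors the '[' -z ... ']' guard
        kill -0 "$pid" || return 1                                     # still running?
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1    # never signal sudo directly
        kill "$pid" && wait "$pid"                                     # terminate, then reap
    }

The plain wait works here because the test script launched the daemon from the same shell.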
00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2158453 ']' 00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.244 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.505 [2024-12-06 18:32:53.068140] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:20:58.505 [2024-12-06 18:32:53.068223] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.505 [2024-12-06 18:32:53.150471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.505 [2024-12-06 18:32:53.204451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.505 [2024-12-06 18:32:53.204524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.505 [2024-12-06 18:32:53.204535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.505 [2024-12-06 18:32:53.204544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.505 [2024-12-06 18:32:53.204551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
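This third target instance (pid 2158453) is again launched inside the cvl_0_0_ns_spdk network namespace, which keeps the interfaces carrying 10.0.0.2 isolated from the host stack, and waitforlisten then blocks until the target's RPC socket answers. Roughly, with the socket poll as a simplified stand-in for the real helper:

    # start the target inside the test netns
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # simplified waitforlisten: block until the RPC socket exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done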
00:20:58.505 [2024-12-06 18:32:53.205575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.CbCNiDTonl 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.CbCNiDTonl 00:20:59.449 18:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:59.449 [2024-12-06 18:32:54.131643] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.449 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:59.711 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:59.972 [2024-12-06 18:32:54.520605] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.972 [2024-12-06 18:32:54.520932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.972 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:59.972 malloc0 00:20:59.972 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:00.234 18:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:21:00.494 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:00.753 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2158954 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2158954 /var/tmp/bdevperf.sock 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2158954 ']' 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.754 18:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.754 [2024-12-06 18:32:55.371854] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:21:00.754 [2024-12-06 18:32:55.371943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158954 ] 00:21:00.754 [2024-12-06 18:32:55.457932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.754 [2024-12-06 18:32:55.487622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.694 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.694 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:01.694 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:21:01.694 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:01.954 [2024-12-06 18:32:56.496369] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.954 nvme0n1 00:21:01.954 18:32:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.954 Running I/O for 1 seconds... 
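Stripped of xtrace noise, the setup that just ran is the full PSK-authenticated NVMe/TCP TLS path: the target gets a TCP transport, a subsystem with a malloc namespace, a TLS listener (-k), a file-based key in its keyring, and a host entry bound to that key; the bdevperf initiator loads the same key file into its own keyring and attaches with --psk. Every command below is copied from the trace; rpc.py stands for the full scripts/rpc.py path used there:

    # target side (RPC socket /var/tmp/spdk.sock)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator side (bdevperf's RPC socket): same key file, then attach with the PSK
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Both the listener and the attach log "TLS support is considered experimental", which is the expected notice for this path.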
00:21:03.335 5023.00 IOPS, 19.62 MiB/s 00:21:03.335 Latency(us) 00:21:03.335 [2024-12-06T17:32:58.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.335 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:03.335 Verification LBA range: start 0x0 length 0x2000 00:21:03.335 nvme0n1 : 1.02 5065.82 19.79 0.00 0.00 25064.39 5434.03 50681.17 00:21:03.335 [2024-12-06T17:32:58.119Z] =================================================================================================================== 00:21:03.335 [2024-12-06T17:32:58.119Z] Total : 5065.82 19.79 0.00 0.00 25064.39 5434.03 50681.17 00:21:03.335 { 00:21:03.335 "results": [ 00:21:03.335 { 00:21:03.335 "job": "nvme0n1", 00:21:03.335 "core_mask": "0x2", 00:21:03.335 "workload": "verify", 00:21:03.335 "status": "finished", 00:21:03.335 "verify_range": { 00:21:03.335 "start": 0, 00:21:03.335 "length": 8192 00:21:03.335 }, 00:21:03.335 "queue_depth": 128, 00:21:03.335 "io_size": 4096, 00:21:03.335 "runtime": 1.016814, 00:21:03.335 "iops": 5065.823247909647, 00:21:03.335 "mibps": 19.788372062147058, 00:21:03.335 "io_failed": 0, 00:21:03.335 "io_timeout": 0, 00:21:03.335 "avg_latency_us": 25064.38962013848, 00:21:03.335 "min_latency_us": 5434.026666666667, 00:21:03.335 "max_latency_us": 50681.17333333333 00:21:03.335 } 00:21:03.335 ], 00:21:03.335 "core_count": 1 00:21:03.335 } 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2158954 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2158954 ']' 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2158954 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2158954 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2158954' 00:21:03.336 killing process with pid 2158954 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2158954 00:21:03.336 Received shutdown signal, test time was about 1.000000 seconds 00:21:03.336 00:21:03.336 Latency(us) 00:21:03.336 [2024-12-06T17:32:58.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.336 [2024-12-06T17:32:58.120Z] =================================================================================================================== 00:21:03.336 [2024-12-06T17:32:58.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2158954 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2158453 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2158453 ']' 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2158453 00:21:03.336 18:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2158453 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2158453' 00:21:03.336 killing process with pid 2158453 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2158453 00:21:03.336 18:32:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2158453 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2159497 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2159497 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2159497 ']' 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.336 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.595 [2024-12-06 18:32:58.122973] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:21:03.595 [2024-12-06 18:32:58.123023] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.595 [2024-12-06 18:32:58.212270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.595 [2024-12-06 18:32:58.241006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.595 [2024-12-06 18:32:58.241037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:03.595 [2024-12-06 18:32:58.241043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.595 [2024-12-06 18:32:58.241048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.595 [2024-12-06 18:32:58.241052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.595 [2024-12-06 18:32:58.241511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.163 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.163 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:04.163 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.163 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.163 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.422 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.422 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:04.422 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.422 18:32:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.422 [2024-12-06 18:32:58.970462] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.422 malloc0 00:21:04.422 [2024-12-06 18:32:58.996436] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.422 [2024-12-06 18:32:58.996625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.422 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.422 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2159775 00:21:04.422 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2159775 /var/tmp/bdevperf.sock 00:21:04.423 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:04.423 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2159775 ']' 00:21:04.423 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.423 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.423 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.423 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.423 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.423 [2024-12-06 18:32:59.075926] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:21:04.423 [2024-12-06 18:32:59.075974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159775 ] 00:21:04.423 [2024-12-06 18:32:59.157316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.423 [2024-12-06 18:32:59.186964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.362 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.362 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:05.362 18:32:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.CbCNiDTonl 00:21:05.362 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:05.623 [2024-12-06 18:33:00.179539] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.623 nvme0n1 00:21:05.623 18:33:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.623 Running I/O for 1 seconds... 00:21:07.004 4620.00 IOPS, 18.05 MiB/s 00:21:07.004 Latency(us) 00:21:07.004 [2024-12-06T17:33:01.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.004 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:07.004 Verification LBA range: start 0x0 length 0x2000 00:21:07.004 nvme0n1 : 1.02 4665.94 18.23 0.00 0.00 27230.96 5133.65 79517.01 00:21:07.004 [2024-12-06T17:33:01.788Z] =================================================================================================================== 00:21:07.004 [2024-12-06T17:33:01.788Z] Total : 4665.94 18.23 0.00 0.00 27230.96 5133.65 79517.01 00:21:07.004 { 00:21:07.004 "results": [ 00:21:07.004 { 00:21:07.004 "job": "nvme0n1", 00:21:07.004 "core_mask": "0x2", 00:21:07.004 "workload": "verify", 00:21:07.004 "status": "finished", 00:21:07.004 "verify_range": { 00:21:07.004 "start": 0, 00:21:07.004 "length": 8192 00:21:07.004 }, 00:21:07.004 "queue_depth": 128, 00:21:07.004 "io_size": 4096, 00:21:07.004 "runtime": 1.017586, 00:21:07.004 "iops": 4665.944696566187, 00:21:07.004 "mibps": 18.22634647096167, 00:21:07.004 "io_failed": 0, 00:21:07.004 "io_timeout": 0, 00:21:07.004 "avg_latency_us": 27230.957955630438, 00:21:07.004 "min_latency_us": 5133.653333333334, 00:21:07.004 "max_latency_us": 79517.01333333334 00:21:07.004 } 00:21:07.004 ], 00:21:07.004 "core_count": 1 00:21:07.004 } 00:21:07.004 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:07.004 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.004 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.004 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.004 18:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:07.004 "subsystems": [ 00:21:07.004 { 00:21:07.004 "subsystem": "keyring", 00:21:07.004 "config": [ 00:21:07.004 { 00:21:07.004 "method": "keyring_file_add_key", 00:21:07.004 "params": { 00:21:07.004 "name": "key0", 00:21:07.004 "path": "/tmp/tmp.CbCNiDTonl" 00:21:07.004 } 00:21:07.004 } 00:21:07.004 ] 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "subsystem": "iobuf", 00:21:07.004 "config": [ 00:21:07.004 { 00:21:07.004 "method": "iobuf_set_options", 00:21:07.004 "params": { 00:21:07.004 "small_pool_count": 8192, 00:21:07.004 "large_pool_count": 1024, 00:21:07.004 "small_bufsize": 8192, 00:21:07.004 "large_bufsize": 135168, 00:21:07.004 "enable_numa": false 00:21:07.004 } 00:21:07.004 } 00:21:07.004 ] 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "subsystem": "sock", 00:21:07.004 "config": [ 00:21:07.004 { 00:21:07.004 "method": "sock_set_default_impl", 00:21:07.004 "params": { 00:21:07.004 "impl_name": "posix" 00:21:07.004 } 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "method": "sock_impl_set_options", 00:21:07.004 "params": { 00:21:07.004 "impl_name": "ssl", 00:21:07.004 "recv_buf_size": 4096, 00:21:07.004 "send_buf_size": 4096, 00:21:07.004 "enable_recv_pipe": true, 00:21:07.004 "enable_quickack": false, 00:21:07.004 "enable_placement_id": 0, 00:21:07.004 "enable_zerocopy_send_server": true, 00:21:07.004 "enable_zerocopy_send_client": false, 00:21:07.004 "zerocopy_threshold": 0, 00:21:07.004 "tls_version": 0, 00:21:07.004 "enable_ktls": false 00:21:07.004 } 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "method": "sock_impl_set_options", 00:21:07.004 "params": { 00:21:07.004 "impl_name": "posix", 00:21:07.004 "recv_buf_size": 2097152, 00:21:07.004 "send_buf_size": 2097152, 00:21:07.004 "enable_recv_pipe": true, 00:21:07.004 "enable_quickack": false, 00:21:07.004 "enable_placement_id": 0, 00:21:07.004 "enable_zerocopy_send_server": true, 00:21:07.004 "enable_zerocopy_send_client": false, 00:21:07.004 "zerocopy_threshold": 0, 00:21:07.004 "tls_version": 0, 00:21:07.004 "enable_ktls": false 00:21:07.004 } 00:21:07.004 } 00:21:07.004 ] 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "subsystem": "vmd", 00:21:07.004 "config": [] 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "subsystem": "accel", 00:21:07.004 "config": [ 00:21:07.004 { 00:21:07.004 "method": "accel_set_options", 00:21:07.004 "params": { 00:21:07.004 "small_cache_size": 128, 00:21:07.004 "large_cache_size": 16, 00:21:07.004 "task_count": 2048, 00:21:07.004 "sequence_count": 2048, 00:21:07.004 "buf_count": 2048 00:21:07.004 } 00:21:07.004 } 00:21:07.004 ] 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "subsystem": "bdev", 00:21:07.004 "config": [ 00:21:07.004 { 00:21:07.004 "method": "bdev_set_options", 00:21:07.004 "params": { 00:21:07.004 "bdev_io_pool_size": 65535, 00:21:07.004 "bdev_io_cache_size": 256, 00:21:07.004 "bdev_auto_examine": true, 00:21:07.004 "iobuf_small_cache_size": 128, 00:21:07.004 "iobuf_large_cache_size": 16 00:21:07.004 } 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "method": "bdev_raid_set_options", 00:21:07.004 "params": { 00:21:07.004 "process_window_size_kb": 1024, 00:21:07.004 "process_max_bandwidth_mb_sec": 0 00:21:07.004 } 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "method": "bdev_iscsi_set_options", 00:21:07.004 "params": { 00:21:07.004 "timeout_sec": 30 00:21:07.004 } 00:21:07.004 }, 00:21:07.004 { 00:21:07.004 "method": "bdev_nvme_set_options", 00:21:07.004 "params": { 00:21:07.004 "action_on_timeout": "none", 00:21:07.004 
"timeout_us": 0, 00:21:07.004 "timeout_admin_us": 0, 00:21:07.004 "keep_alive_timeout_ms": 10000, 00:21:07.004 "arbitration_burst": 0, 00:21:07.005 "low_priority_weight": 0, 00:21:07.005 "medium_priority_weight": 0, 00:21:07.005 "high_priority_weight": 0, 00:21:07.005 "nvme_adminq_poll_period_us": 10000, 00:21:07.005 "nvme_ioq_poll_period_us": 0, 00:21:07.005 "io_queue_requests": 0, 00:21:07.005 "delay_cmd_submit": true, 00:21:07.005 "transport_retry_count": 4, 00:21:07.005 "bdev_retry_count": 3, 00:21:07.005 "transport_ack_timeout": 0, 00:21:07.005 "ctrlr_loss_timeout_sec": 0, 00:21:07.005 "reconnect_delay_sec": 0, 00:21:07.005 "fast_io_fail_timeout_sec": 0, 00:21:07.005 "disable_auto_failback": false, 00:21:07.005 "generate_uuids": false, 00:21:07.005 "transport_tos": 0, 00:21:07.005 "nvme_error_stat": false, 00:21:07.005 "rdma_srq_size": 0, 00:21:07.005 "io_path_stat": false, 00:21:07.005 "allow_accel_sequence": false, 00:21:07.005 "rdma_max_cq_size": 0, 00:21:07.005 "rdma_cm_event_timeout_ms": 0, 00:21:07.005 "dhchap_digests": [ 00:21:07.005 "sha256", 00:21:07.005 "sha384", 00:21:07.005 "sha512" 00:21:07.005 ], 00:21:07.005 "dhchap_dhgroups": [ 00:21:07.005 "null", 00:21:07.005 "ffdhe2048", 00:21:07.005 "ffdhe3072", 00:21:07.005 "ffdhe4096", 00:21:07.005 "ffdhe6144", 00:21:07.005 "ffdhe8192" 00:21:07.005 ] 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "bdev_nvme_set_hotplug", 00:21:07.005 "params": { 00:21:07.005 "period_us": 100000, 00:21:07.005 "enable": false 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "bdev_malloc_create", 00:21:07.005 "params": { 00:21:07.005 "name": "malloc0", 00:21:07.005 "num_blocks": 8192, 00:21:07.005 "block_size": 4096, 00:21:07.005 "physical_block_size": 4096, 00:21:07.005 "uuid": "f49aecfc-cf92-4989-b314-94b709374d85", 00:21:07.005 "optimal_io_boundary": 0, 00:21:07.005 "md_size": 0, 00:21:07.005 "dif_type": 0, 00:21:07.005 "dif_is_head_of_md": false, 00:21:07.005 "dif_pi_format": 0 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "bdev_wait_for_examine" 00:21:07.005 } 00:21:07.005 ] 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "subsystem": "nbd", 00:21:07.005 "config": [] 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "subsystem": "scheduler", 00:21:07.005 "config": [ 00:21:07.005 { 00:21:07.005 "method": "framework_set_scheduler", 00:21:07.005 "params": { 00:21:07.005 "name": "static" 00:21:07.005 } 00:21:07.005 } 00:21:07.005 ] 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "subsystem": "nvmf", 00:21:07.005 "config": [ 00:21:07.005 { 00:21:07.005 "method": "nvmf_set_config", 00:21:07.005 "params": { 00:21:07.005 "discovery_filter": "match_any", 00:21:07.005 "admin_cmd_passthru": { 00:21:07.005 "identify_ctrlr": false 00:21:07.005 }, 00:21:07.005 "dhchap_digests": [ 00:21:07.005 "sha256", 00:21:07.005 "sha384", 00:21:07.005 "sha512" 00:21:07.005 ], 00:21:07.005 "dhchap_dhgroups": [ 00:21:07.005 "null", 00:21:07.005 "ffdhe2048", 00:21:07.005 "ffdhe3072", 00:21:07.005 "ffdhe4096", 00:21:07.005 "ffdhe6144", 00:21:07.005 "ffdhe8192" 00:21:07.005 ] 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "nvmf_set_max_subsystems", 00:21:07.005 "params": { 00:21:07.005 "max_subsystems": 1024 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "nvmf_set_crdt", 00:21:07.005 "params": { 00:21:07.005 "crdt1": 0, 00:21:07.005 "crdt2": 0, 00:21:07.005 "crdt3": 0 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "nvmf_create_transport", 00:21:07.005 "params": 
{ 00:21:07.005 "trtype": "TCP", 00:21:07.005 "max_queue_depth": 128, 00:21:07.005 "max_io_qpairs_per_ctrlr": 127, 00:21:07.005 "in_capsule_data_size": 4096, 00:21:07.005 "max_io_size": 131072, 00:21:07.005 "io_unit_size": 131072, 00:21:07.005 "max_aq_depth": 128, 00:21:07.005 "num_shared_buffers": 511, 00:21:07.005 "buf_cache_size": 4294967295, 00:21:07.005 "dif_insert_or_strip": false, 00:21:07.005 "zcopy": false, 00:21:07.005 "c2h_success": false, 00:21:07.005 "sock_priority": 0, 00:21:07.005 "abort_timeout_sec": 1, 00:21:07.005 "ack_timeout": 0, 00:21:07.005 "data_wr_pool_size": 0 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "nvmf_create_subsystem", 00:21:07.005 "params": { 00:21:07.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.005 "allow_any_host": false, 00:21:07.005 "serial_number": "00000000000000000000", 00:21:07.005 "model_number": "SPDK bdev Controller", 00:21:07.005 "max_namespaces": 32, 00:21:07.005 "min_cntlid": 1, 00:21:07.005 "max_cntlid": 65519, 00:21:07.005 "ana_reporting": false 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "nvmf_subsystem_add_host", 00:21:07.005 "params": { 00:21:07.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.005 "host": "nqn.2016-06.io.spdk:host1", 00:21:07.005 "psk": "key0" 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "nvmf_subsystem_add_ns", 00:21:07.005 "params": { 00:21:07.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.005 "namespace": { 00:21:07.005 "nsid": 1, 00:21:07.005 "bdev_name": "malloc0", 00:21:07.005 "nguid": "F49AECFCCF924989B31494B709374D85", 00:21:07.005 "uuid": "f49aecfc-cf92-4989-b314-94b709374d85", 00:21:07.005 "no_auto_visible": false 00:21:07.005 } 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "nvmf_subsystem_add_listener", 00:21:07.005 "params": { 00:21:07.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.005 "listen_address": { 00:21:07.005 "trtype": "TCP", 00:21:07.005 "adrfam": "IPv4", 00:21:07.005 "traddr": "10.0.0.2", 00:21:07.005 "trsvcid": "4420" 00:21:07.005 }, 00:21:07.005 "secure_channel": false, 00:21:07.005 "sock_impl": "ssl" 00:21:07.005 } 00:21:07.005 } 00:21:07.005 ] 00:21:07.005 } 00:21:07.005 ] 00:21:07.005 }' 00:21:07.005 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:07.005 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:07.005 "subsystems": [ 00:21:07.005 { 00:21:07.005 "subsystem": "keyring", 00:21:07.005 "config": [ 00:21:07.005 { 00:21:07.005 "method": "keyring_file_add_key", 00:21:07.005 "params": { 00:21:07.005 "name": "key0", 00:21:07.005 "path": "/tmp/tmp.CbCNiDTonl" 00:21:07.005 } 00:21:07.005 } 00:21:07.005 ] 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "subsystem": "iobuf", 00:21:07.005 "config": [ 00:21:07.005 { 00:21:07.005 "method": "iobuf_set_options", 00:21:07.005 "params": { 00:21:07.005 "small_pool_count": 8192, 00:21:07.005 "large_pool_count": 1024, 00:21:07.005 "small_bufsize": 8192, 00:21:07.005 "large_bufsize": 135168, 00:21:07.005 "enable_numa": false 00:21:07.005 } 00:21:07.005 } 00:21:07.005 ] 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "subsystem": "sock", 00:21:07.005 "config": [ 00:21:07.005 { 00:21:07.005 "method": "sock_set_default_impl", 00:21:07.005 "params": { 00:21:07.005 "impl_name": "posix" 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "sock_impl_set_options", 00:21:07.005 
"params": { 00:21:07.005 "impl_name": "ssl", 00:21:07.005 "recv_buf_size": 4096, 00:21:07.005 "send_buf_size": 4096, 00:21:07.005 "enable_recv_pipe": true, 00:21:07.005 "enable_quickack": false, 00:21:07.005 "enable_placement_id": 0, 00:21:07.005 "enable_zerocopy_send_server": true, 00:21:07.005 "enable_zerocopy_send_client": false, 00:21:07.005 "zerocopy_threshold": 0, 00:21:07.005 "tls_version": 0, 00:21:07.005 "enable_ktls": false 00:21:07.005 } 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "method": "sock_impl_set_options", 00:21:07.005 "params": { 00:21:07.005 "impl_name": "posix", 00:21:07.005 "recv_buf_size": 2097152, 00:21:07.005 "send_buf_size": 2097152, 00:21:07.005 "enable_recv_pipe": true, 00:21:07.005 "enable_quickack": false, 00:21:07.005 "enable_placement_id": 0, 00:21:07.005 "enable_zerocopy_send_server": true, 00:21:07.005 "enable_zerocopy_send_client": false, 00:21:07.005 "zerocopy_threshold": 0, 00:21:07.005 "tls_version": 0, 00:21:07.005 "enable_ktls": false 00:21:07.005 } 00:21:07.005 } 00:21:07.005 ] 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "subsystem": "vmd", 00:21:07.005 "config": [] 00:21:07.005 }, 00:21:07.005 { 00:21:07.005 "subsystem": "accel", 00:21:07.005 "config": [ 00:21:07.005 { 00:21:07.005 "method": "accel_set_options", 00:21:07.005 "params": { 00:21:07.005 "small_cache_size": 128, 00:21:07.005 "large_cache_size": 16, 00:21:07.005 "task_count": 2048, 00:21:07.006 "sequence_count": 2048, 00:21:07.006 "buf_count": 2048 00:21:07.006 } 00:21:07.006 } 00:21:07.006 ] 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "subsystem": "bdev", 00:21:07.006 "config": [ 00:21:07.006 { 00:21:07.006 "method": "bdev_set_options", 00:21:07.006 "params": { 00:21:07.006 "bdev_io_pool_size": 65535, 00:21:07.006 "bdev_io_cache_size": 256, 00:21:07.006 "bdev_auto_examine": true, 00:21:07.006 "iobuf_small_cache_size": 128, 00:21:07.006 "iobuf_large_cache_size": 16 00:21:07.006 } 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "method": "bdev_raid_set_options", 00:21:07.006 "params": { 00:21:07.006 "process_window_size_kb": 1024, 00:21:07.006 "process_max_bandwidth_mb_sec": 0 00:21:07.006 } 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "method": "bdev_iscsi_set_options", 00:21:07.006 "params": { 00:21:07.006 "timeout_sec": 30 00:21:07.006 } 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "method": "bdev_nvme_set_options", 00:21:07.006 "params": { 00:21:07.006 "action_on_timeout": "none", 00:21:07.006 "timeout_us": 0, 00:21:07.006 "timeout_admin_us": 0, 00:21:07.006 "keep_alive_timeout_ms": 10000, 00:21:07.006 "arbitration_burst": 0, 00:21:07.006 "low_priority_weight": 0, 00:21:07.006 "medium_priority_weight": 0, 00:21:07.006 "high_priority_weight": 0, 00:21:07.006 "nvme_adminq_poll_period_us": 10000, 00:21:07.006 "nvme_ioq_poll_period_us": 0, 00:21:07.006 "io_queue_requests": 512, 00:21:07.006 "delay_cmd_submit": true, 00:21:07.006 "transport_retry_count": 4, 00:21:07.006 "bdev_retry_count": 3, 00:21:07.006 "transport_ack_timeout": 0, 00:21:07.006 "ctrlr_loss_timeout_sec": 0, 00:21:07.006 "reconnect_delay_sec": 0, 00:21:07.006 "fast_io_fail_timeout_sec": 0, 00:21:07.006 "disable_auto_failback": false, 00:21:07.006 "generate_uuids": false, 00:21:07.006 "transport_tos": 0, 00:21:07.006 "nvme_error_stat": false, 00:21:07.006 "rdma_srq_size": 0, 00:21:07.006 "io_path_stat": false, 00:21:07.006 "allow_accel_sequence": false, 00:21:07.006 "rdma_max_cq_size": 0, 00:21:07.006 "rdma_cm_event_timeout_ms": 0, 00:21:07.006 "dhchap_digests": [ 00:21:07.006 "sha256", 00:21:07.006 "sha384", 00:21:07.006 
"sha512" 00:21:07.006 ], 00:21:07.006 "dhchap_dhgroups": [ 00:21:07.006 "null", 00:21:07.006 "ffdhe2048", 00:21:07.006 "ffdhe3072", 00:21:07.006 "ffdhe4096", 00:21:07.006 "ffdhe6144", 00:21:07.006 "ffdhe8192" 00:21:07.006 ] 00:21:07.006 } 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "method": "bdev_nvme_attach_controller", 00:21:07.006 "params": { 00:21:07.006 "name": "nvme0", 00:21:07.006 "trtype": "TCP", 00:21:07.006 "adrfam": "IPv4", 00:21:07.006 "traddr": "10.0.0.2", 00:21:07.006 "trsvcid": "4420", 00:21:07.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.006 "prchk_reftag": false, 00:21:07.006 "prchk_guard": false, 00:21:07.006 "ctrlr_loss_timeout_sec": 0, 00:21:07.006 "reconnect_delay_sec": 0, 00:21:07.006 "fast_io_fail_timeout_sec": 0, 00:21:07.006 "psk": "key0", 00:21:07.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.006 "hdgst": false, 00:21:07.006 "ddgst": false, 00:21:07.006 "multipath": "multipath" 00:21:07.006 } 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "method": "bdev_nvme_set_hotplug", 00:21:07.006 "params": { 00:21:07.006 "period_us": 100000, 00:21:07.006 "enable": false 00:21:07.006 } 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "method": "bdev_enable_histogram", 00:21:07.006 "params": { 00:21:07.006 "name": "nvme0n1", 00:21:07.006 "enable": true 00:21:07.006 } 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "method": "bdev_wait_for_examine" 00:21:07.006 } 00:21:07.006 ] 00:21:07.006 }, 00:21:07.006 { 00:21:07.006 "subsystem": "nbd", 00:21:07.006 "config": [] 00:21:07.006 } 00:21:07.006 ] 00:21:07.006 }' 00:21:07.006 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2159775 00:21:07.006 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2159775 ']' 00:21:07.006 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2159775 00:21:07.006 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:07.006 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.006 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159775 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159775' 00:21:07.266 killing process with pid 2159775 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2159775 00:21:07.266 Received shutdown signal, test time was about 1.000000 seconds 00:21:07.266 00:21:07.266 Latency(us) 00:21:07.266 [2024-12-06T17:33:02.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.266 [2024-12-06T17:33:02.050Z] =================================================================================================================== 00:21:07.266 [2024-12-06T17:33:02.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2159775 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2159497 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2159497 
']' 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2159497 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.266 18:33:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2159497 00:21:07.266 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.266 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.266 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2159497' 00:21:07.266 killing process with pid 2159497 00:21:07.266 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2159497 00:21:07.266 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2159497 00:21:07.527 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:07.527 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:07.527 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.527 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.527 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:07.527 "subsystems": [ 00:21:07.527 { 00:21:07.527 "subsystem": "keyring", 00:21:07.527 "config": [ 00:21:07.527 { 00:21:07.527 "method": "keyring_file_add_key", 00:21:07.527 "params": { 00:21:07.527 "name": "key0", 00:21:07.527 "path": "/tmp/tmp.CbCNiDTonl" 00:21:07.527 } 00:21:07.527 } 00:21:07.527 ] 00:21:07.527 }, 00:21:07.527 { 00:21:07.527 "subsystem": "iobuf", 00:21:07.527 "config": [ 00:21:07.527 { 00:21:07.527 "method": "iobuf_set_options", 00:21:07.527 "params": { 00:21:07.527 "small_pool_count": 8192, 00:21:07.527 "large_pool_count": 1024, 00:21:07.527 "small_bufsize": 8192, 00:21:07.527 "large_bufsize": 135168, 00:21:07.527 "enable_numa": false 00:21:07.527 } 00:21:07.527 } 00:21:07.527 ] 00:21:07.527 }, 00:21:07.527 { 00:21:07.527 "subsystem": "sock", 00:21:07.527 "config": [ 00:21:07.527 { 00:21:07.527 "method": "sock_set_default_impl", 00:21:07.527 "params": { 00:21:07.527 "impl_name": "posix" 00:21:07.527 } 00:21:07.527 }, 00:21:07.527 { 00:21:07.527 "method": "sock_impl_set_options", 00:21:07.527 "params": { 00:21:07.527 "impl_name": "ssl", 00:21:07.527 "recv_buf_size": 4096, 00:21:07.527 "send_buf_size": 4096, 00:21:07.527 "enable_recv_pipe": true, 00:21:07.527 "enable_quickack": false, 00:21:07.528 "enable_placement_id": 0, 00:21:07.528 "enable_zerocopy_send_server": true, 00:21:07.528 "enable_zerocopy_send_client": false, 00:21:07.528 "zerocopy_threshold": 0, 00:21:07.528 "tls_version": 0, 00:21:07.528 "enable_ktls": false 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "sock_impl_set_options", 00:21:07.528 "params": { 00:21:07.528 "impl_name": "posix", 00:21:07.528 "recv_buf_size": 2097152, 00:21:07.528 "send_buf_size": 2097152, 00:21:07.528 "enable_recv_pipe": true, 00:21:07.528 "enable_quickack": false, 00:21:07.528 "enable_placement_id": 0, 00:21:07.528 "enable_zerocopy_send_server": true, 00:21:07.528 "enable_zerocopy_send_client": 
false, 00:21:07.528 "zerocopy_threshold": 0, 00:21:07.528 "tls_version": 0, 00:21:07.528 "enable_ktls": false 00:21:07.528 } 00:21:07.528 } 00:21:07.528 ] 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "subsystem": "vmd", 00:21:07.528 "config": [] 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "subsystem": "accel", 00:21:07.528 "config": [ 00:21:07.528 { 00:21:07.528 "method": "accel_set_options", 00:21:07.528 "params": { 00:21:07.528 "small_cache_size": 128, 00:21:07.528 "large_cache_size": 16, 00:21:07.528 "task_count": 2048, 00:21:07.528 "sequence_count": 2048, 00:21:07.528 "buf_count": 2048 00:21:07.528 } 00:21:07.528 } 00:21:07.528 ] 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "subsystem": "bdev", 00:21:07.528 "config": [ 00:21:07.528 { 00:21:07.528 "method": "bdev_set_options", 00:21:07.528 "params": { 00:21:07.528 "bdev_io_pool_size": 65535, 00:21:07.528 "bdev_io_cache_size": 256, 00:21:07.528 "bdev_auto_examine": true, 00:21:07.528 "iobuf_small_cache_size": 128, 00:21:07.528 "iobuf_large_cache_size": 16 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "bdev_raid_set_options", 00:21:07.528 "params": { 00:21:07.528 "process_window_size_kb": 1024, 00:21:07.528 "process_max_bandwidth_mb_sec": 0 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "bdev_iscsi_set_options", 00:21:07.528 "params": { 00:21:07.528 "timeout_sec": 30 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "bdev_nvme_set_options", 00:21:07.528 "params": { 00:21:07.528 "action_on_timeout": "none", 00:21:07.528 "timeout_us": 0, 00:21:07.528 "timeout_admin_us": 0, 00:21:07.528 "keep_alive_timeout_ms": 10000, 00:21:07.528 "arbitration_burst": 0, 00:21:07.528 "low_priority_weight": 0, 00:21:07.528 "medium_priority_weight": 0, 00:21:07.528 "high_priority_weight": 0, 00:21:07.528 "nvme_adminq_poll_period_us": 10000, 00:21:07.528 "nvme_ioq_poll_period_us": 0, 00:21:07.528 "io_queue_requests": 0, 00:21:07.528 "delay_cmd_submit": true, 00:21:07.528 "transport_retry_count": 4, 00:21:07.528 "bdev_retry_count": 3, 00:21:07.528 "transport_ack_timeout": 0, 00:21:07.528 "ctrlr_loss_timeout_sec": 0, 00:21:07.528 "reconnect_delay_sec": 0, 00:21:07.528 "fast_io_fail_timeout_sec": 0, 00:21:07.528 "disable_auto_failback": false, 00:21:07.528 "generate_uuids": false, 00:21:07.528 "transport_tos": 0, 00:21:07.528 "nvme_error_stat": false, 00:21:07.528 "rdma_srq_size": 0, 00:21:07.528 "io_path_stat": false, 00:21:07.528 "allow_accel_sequence": false, 00:21:07.528 "rdma_max_cq_size": 0, 00:21:07.528 "rdma_cm_event_timeout_ms": 0, 00:21:07.528 "dhchap_digests": [ 00:21:07.528 "sha256", 00:21:07.528 "sha384", 00:21:07.528 "sha512" 00:21:07.528 ], 00:21:07.528 "dhchap_dhgroups": [ 00:21:07.528 "null", 00:21:07.528 "ffdhe2048", 00:21:07.528 "ffdhe3072", 00:21:07.528 "ffdhe4096", 00:21:07.528 "ffdhe6144", 00:21:07.528 "ffdhe8192" 00:21:07.528 ] 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "bdev_nvme_set_hotplug", 00:21:07.528 "params": { 00:21:07.528 "period_us": 100000, 00:21:07.528 "enable": false 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "bdev_malloc_create", 00:21:07.528 "params": { 00:21:07.528 "name": "malloc0", 00:21:07.528 "num_blocks": 8192, 00:21:07.528 "block_size": 4096, 00:21:07.528 "physical_block_size": 4096, 00:21:07.528 "uuid": "f49aecfc-cf92-4989-b314-94b709374d85", 00:21:07.528 "optimal_io_boundary": 0, 00:21:07.528 "md_size": 0, 00:21:07.528 "dif_type": 0, 00:21:07.528 "dif_is_head_of_md": false, 00:21:07.528 "dif_pi_format": 0 
00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "bdev_wait_for_examine" 00:21:07.528 } 00:21:07.528 ] 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "subsystem": "nbd", 00:21:07.528 "config": [] 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "subsystem": "scheduler", 00:21:07.528 "config": [ 00:21:07.528 { 00:21:07.528 "method": "framework_set_scheduler", 00:21:07.528 "params": { 00:21:07.528 "name": "static" 00:21:07.528 } 00:21:07.528 } 00:21:07.528 ] 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "subsystem": "nvmf", 00:21:07.528 "config": [ 00:21:07.528 { 00:21:07.528 "method": "nvmf_set_config", 00:21:07.528 "params": { 00:21:07.528 "discovery_filter": "match_any", 00:21:07.528 "admin_cmd_passthru": { 00:21:07.528 "identify_ctrlr": false 00:21:07.528 }, 00:21:07.528 "dhchap_digests": [ 00:21:07.528 "sha256", 00:21:07.528 "sha384", 00:21:07.528 "sha512" 00:21:07.528 ], 00:21:07.528 "dhchap_dhgroups": [ 00:21:07.528 "null", 00:21:07.528 "ffdhe2048", 00:21:07.528 "ffdhe3072", 00:21:07.528 "ffdhe4096", 00:21:07.528 "ffdhe6144", 00:21:07.528 "ffdhe8192" 00:21:07.528 ] 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "nvmf_set_max_subsystems", 00:21:07.528 "params": { 00:21:07.528 "max_subsystems": 1024 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "nvmf_set_crdt", 00:21:07.528 "params": { 00:21:07.528 "crdt1": 0, 00:21:07.528 "crdt2": 0, 00:21:07.528 "crdt3": 0 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "nvmf_create_transport", 00:21:07.528 "params": { 00:21:07.528 "trtype": "TCP", 00:21:07.528 "max_queue_depth": 128, 00:21:07.528 "max_io_qpairs_per_ctrlr": 127, 00:21:07.528 "in_capsule_data_size": 4096, 00:21:07.528 "max_io_size": 131072, 00:21:07.528 "io_unit_size": 131072, 00:21:07.528 "max_aq_depth": 128, 00:21:07.528 "num_shared_buffers": 511, 00:21:07.528 "buf_cache_size": 4294967295, 00:21:07.528 "dif_insert_or_strip": false, 00:21:07.528 "zcopy": false, 00:21:07.528 "c2h_success": false, 00:21:07.528 "sock_priority": 0, 00:21:07.528 "abort_timeout_sec": 1, 00:21:07.528 "ack_timeout": 0, 00:21:07.528 "data_wr_pool_size": 0 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "nvmf_create_subsystem", 00:21:07.528 "params": { 00:21:07.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.528 "allow_any_host": false, 00:21:07.528 "serial_number": "00000000000000000000", 00:21:07.528 "model_number": "SPDK bdev Controller", 00:21:07.528 "max_namespaces": 32, 00:21:07.528 "min_cntlid": 1, 00:21:07.528 "max_cntlid": 65519, 00:21:07.528 "ana_reporting": false 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "nvmf_subsystem_add_host", 00:21:07.528 "params": { 00:21:07.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.528 "host": "nqn.2016-06.io.spdk:host1", 00:21:07.528 "psk": "key0" 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "nvmf_subsystem_add_ns", 00:21:07.528 "params": { 00:21:07.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.528 "namespace": { 00:21:07.528 "nsid": 1, 00:21:07.528 "bdev_name": "malloc0", 00:21:07.528 "nguid": "F49AECFCCF924989B31494B709374D85", 00:21:07.528 "uuid": "f49aecfc-cf92-4989-b314-94b709374d85", 00:21:07.528 "no_auto_visible": false 00:21:07.528 } 00:21:07.528 } 00:21:07.528 }, 00:21:07.528 { 00:21:07.528 "method": "nvmf_subsystem_add_listener", 00:21:07.528 "params": { 00:21:07.528 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.528 "listen_address": { 00:21:07.528 "trtype": "TCP", 00:21:07.528 "adrfam": "IPv4", 
00:21:07.528 "traddr": "10.0.0.2", 00:21:07.528 "trsvcid": "4420" 00:21:07.528 }, 00:21:07.528 "secure_channel": false, 00:21:07.528 "sock_impl": "ssl" 00:21:07.528 } 00:21:07.528 } 00:21:07.528 ] 00:21:07.528 } 00:21:07.528 ] 00:21:07.528 }' 00:21:07.528 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2160390 00:21:07.528 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2160390 00:21:07.528 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:07.528 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2160390 ']' 00:21:07.528 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.528 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.529 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.529 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.529 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.529 [2024-12-06 18:33:02.178068] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:21:07.529 [2024-12-06 18:33:02.178126] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.529 [2024-12-06 18:33:02.266120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.529 [2024-12-06 18:33:02.295508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.529 [2024-12-06 18:33:02.295537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.529 [2024-12-06 18:33:02.295542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.529 [2024-12-06 18:33:02.295547] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.529 [2024-12-06 18:33:02.295554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:07.529 [2024-12-06 18:33:02.296039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.788 [2024-12-06 18:33:02.490298] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.788 [2024-12-06 18:33:02.522332] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.788 [2024-12-06 18:33:02.522526] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.357 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.357 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:08.357 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:08.357 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.357 18:33:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.357 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.357 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2160668 00:21:08.357 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2160668 /var/tmp/bdevperf.sock 00:21:08.357 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2160668 ']' 00:21:08.357 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.357 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.357 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:08.357 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:08.358 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.358 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.358 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:08.358 "subsystems": [ 00:21:08.358 { 00:21:08.358 "subsystem": "keyring", 00:21:08.358 "config": [ 00:21:08.358 { 00:21:08.358 "method": "keyring_file_add_key", 00:21:08.358 "params": { 00:21:08.358 "name": "key0", 00:21:08.358 "path": "/tmp/tmp.CbCNiDTonl" 00:21:08.358 } 00:21:08.358 } 00:21:08.358 ] 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "subsystem": "iobuf", 00:21:08.358 "config": [ 00:21:08.358 { 00:21:08.358 "method": "iobuf_set_options", 00:21:08.358 "params": { 00:21:08.358 "small_pool_count": 8192, 00:21:08.358 "large_pool_count": 1024, 00:21:08.358 "small_bufsize": 8192, 00:21:08.358 "large_bufsize": 135168, 00:21:08.358 "enable_numa": false 00:21:08.358 } 00:21:08.358 } 00:21:08.358 ] 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "subsystem": "sock", 00:21:08.358 "config": [ 00:21:08.358 { 00:21:08.358 "method": "sock_set_default_impl", 00:21:08.358 "params": { 00:21:08.358 "impl_name": "posix" 00:21:08.358 } 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "method": "sock_impl_set_options", 00:21:08.358 "params": { 00:21:08.358 "impl_name": "ssl", 00:21:08.358 "recv_buf_size": 4096, 00:21:08.358 "send_buf_size": 4096, 00:21:08.358 "enable_recv_pipe": true, 00:21:08.358 "enable_quickack": false, 00:21:08.358 "enable_placement_id": 0, 00:21:08.358 "enable_zerocopy_send_server": true, 00:21:08.358 "enable_zerocopy_send_client": false, 00:21:08.358 "zerocopy_threshold": 0, 00:21:08.358 "tls_version": 0, 00:21:08.358 "enable_ktls": false 00:21:08.358 } 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "method": "sock_impl_set_options", 00:21:08.358 "params": { 00:21:08.358 "impl_name": "posix", 00:21:08.358 "recv_buf_size": 2097152, 00:21:08.358 "send_buf_size": 2097152, 00:21:08.358 "enable_recv_pipe": true, 00:21:08.358 "enable_quickack": false, 00:21:08.358 "enable_placement_id": 0, 00:21:08.358 "enable_zerocopy_send_server": true, 00:21:08.358 "enable_zerocopy_send_client": false, 00:21:08.358 "zerocopy_threshold": 0, 00:21:08.358 "tls_version": 0, 00:21:08.358 "enable_ktls": false 00:21:08.358 } 00:21:08.358 } 00:21:08.358 ] 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "subsystem": "vmd", 00:21:08.358 "config": [] 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "subsystem": "accel", 00:21:08.358 "config": [ 00:21:08.358 { 00:21:08.358 "method": "accel_set_options", 00:21:08.358 "params": { 00:21:08.358 "small_cache_size": 128, 00:21:08.358 "large_cache_size": 16, 00:21:08.358 "task_count": 2048, 00:21:08.358 "sequence_count": 2048, 00:21:08.358 "buf_count": 2048 00:21:08.358 } 00:21:08.358 } 00:21:08.358 ] 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "subsystem": "bdev", 00:21:08.358 "config": [ 00:21:08.358 { 00:21:08.358 "method": "bdev_set_options", 00:21:08.358 "params": { 00:21:08.358 "bdev_io_pool_size": 65535, 00:21:08.358 "bdev_io_cache_size": 256, 00:21:08.358 "bdev_auto_examine": true, 00:21:08.358 "iobuf_small_cache_size": 128, 00:21:08.358 "iobuf_large_cache_size": 16 00:21:08.358 } 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "method": 
"bdev_raid_set_options", 00:21:08.358 "params": { 00:21:08.358 "process_window_size_kb": 1024, 00:21:08.358 "process_max_bandwidth_mb_sec": 0 00:21:08.358 } 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "method": "bdev_iscsi_set_options", 00:21:08.358 "params": { 00:21:08.358 "timeout_sec": 30 00:21:08.358 } 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "method": "bdev_nvme_set_options", 00:21:08.358 "params": { 00:21:08.358 "action_on_timeout": "none", 00:21:08.358 "timeout_us": 0, 00:21:08.358 "timeout_admin_us": 0, 00:21:08.358 "keep_alive_timeout_ms": 10000, 00:21:08.358 "arbitration_burst": 0, 00:21:08.358 "low_priority_weight": 0, 00:21:08.358 "medium_priority_weight": 0, 00:21:08.358 "high_priority_weight": 0, 00:21:08.358 "nvme_adminq_poll_period_us": 10000, 00:21:08.358 "nvme_ioq_poll_period_us": 0, 00:21:08.358 "io_queue_requests": 512, 00:21:08.358 "delay_cmd_submit": true, 00:21:08.358 "transport_retry_count": 4, 00:21:08.358 "bdev_retry_count": 3, 00:21:08.358 "transport_ack_timeout": 0, 00:21:08.358 "ctrlr_loss_timeout_sec": 0, 00:21:08.358 "reconnect_delay_sec": 0, 00:21:08.358 "fast_io_fail_timeout_sec": 0, 00:21:08.358 "disable_auto_failback": false, 00:21:08.358 "generate_uuids": false, 00:21:08.358 "transport_tos": 0, 00:21:08.358 "nvme_error_stat": false, 00:21:08.358 "rdma_srq_size": 0, 00:21:08.358 "io_path_stat": false, 00:21:08.358 "allow_accel_sequence": false, 00:21:08.358 "rdma_max_cq_size": 0, 00:21:08.358 "rdma_cm_event_timeout_ms": 0, 00:21:08.358 "dhchap_digests": [ 00:21:08.358 "sha256", 00:21:08.358 "sha384", 00:21:08.358 "sha512" 00:21:08.358 ], 00:21:08.358 "dhchap_dhgroups": [ 00:21:08.358 "null", 00:21:08.358 "ffdhe2048", 00:21:08.358 "ffdhe3072", 00:21:08.358 "ffdhe4096", 00:21:08.358 "ffdhe6144", 00:21:08.358 "ffdhe8192" 00:21:08.358 ] 00:21:08.358 } 00:21:08.358 }, 00:21:08.358 { 00:21:08.358 "method": "bdev_nvme_attach_controller", 00:21:08.358 "params": { 00:21:08.358 "name": "nvme0", 00:21:08.358 "trtype": "TCP", 00:21:08.358 "adrfam": "IPv4", 00:21:08.358 "traddr": "10.0.0.2", 00:21:08.359 "trsvcid": "4420", 00:21:08.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.359 "prchk_reftag": false, 00:21:08.359 "prchk_guard": false, 00:21:08.359 "ctrlr_loss_timeout_sec": 0, 00:21:08.359 "reconnect_delay_sec": 0, 00:21:08.359 "fast_io_fail_timeout_sec": 0, 00:21:08.359 "psk": "key0", 00:21:08.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.359 "hdgst": false, 00:21:08.359 "ddgst": false, 00:21:08.359 "multipath": "multipath" 00:21:08.359 } 00:21:08.359 }, 00:21:08.359 { 00:21:08.359 "method": "bdev_nvme_set_hotplug", 00:21:08.359 "params": { 00:21:08.359 "period_us": 100000, 00:21:08.359 "enable": false 00:21:08.359 } 00:21:08.359 }, 00:21:08.359 { 00:21:08.359 "method": "bdev_enable_histogram", 00:21:08.359 "params": { 00:21:08.359 "name": "nvme0n1", 00:21:08.359 "enable": true 00:21:08.359 } 00:21:08.359 }, 00:21:08.359 { 00:21:08.359 "method": "bdev_wait_for_examine" 00:21:08.359 } 00:21:08.359 ] 00:21:08.359 }, 00:21:08.359 { 00:21:08.359 "subsystem": "nbd", 00:21:08.359 "config": [] 00:21:08.359 } 00:21:08.359 ] 00:21:08.359 }' 00:21:08.359 [2024-12-06 18:33:03.051354] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:21:08.359 [2024-12-06 18:33:03.051406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160668 ] 00:21:08.359 [2024-12-06 18:33:03.136100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.620 [2024-12-06 18:33:03.165860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.620 [2024-12-06 18:33:03.301823] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:09.193 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:09.193 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:09.193 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:09.193 18:33:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:09.454 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.454 18:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:09.454 Running I/O for 1 seconds... 00:21:10.477 5499.00 IOPS, 21.48 MiB/s 00:21:10.477 Latency(us) 00:21:10.477 [2024-12-06T17:33:05.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.477 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:10.477 Verification LBA range: start 0x0 length 0x2000 00:21:10.477 nvme0n1 : 1.01 5564.27 21.74 0.00 0.00 22863.56 4642.13 55705.60 00:21:10.477 [2024-12-06T17:33:05.261Z] =================================================================================================================== 00:21:10.477 [2024-12-06T17:33:05.261Z] Total : 5564.27 21.74 0.00 0.00 22863.56 4642.13 55705.60 00:21:10.477 { 00:21:10.477 "results": [ 00:21:10.477 { 00:21:10.477 "job": "nvme0n1", 00:21:10.477 "core_mask": "0x2", 00:21:10.477 "workload": "verify", 00:21:10.477 "status": "finished", 00:21:10.477 "verify_range": { 00:21:10.477 "start": 0, 00:21:10.477 "length": 8192 00:21:10.477 }, 00:21:10.477 "queue_depth": 128, 00:21:10.477 "io_size": 4096, 00:21:10.477 "runtime": 1.011274, 00:21:10.477 "iops": 5564.268437634113, 00:21:10.477 "mibps": 21.735423584508254, 00:21:10.477 "io_failed": 0, 00:21:10.477 "io_timeout": 0, 00:21:10.477 "avg_latency_us": 22863.55850956697, 00:21:10.477 "min_latency_us": 4642.133333333333, 00:21:10.477 "max_latency_us": 55705.6 00:21:10.477 } 00:21:10.477 ], 00:21:10.477 "core_count": 1 00:21:10.477 } 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 
00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:10.477 nvmf_trace.0 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2160668 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2160668 ']' 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2160668 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.477 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160668 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160668' 00:21:10.755 killing process with pid 2160668 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2160668 00:21:10.755 Received shutdown signal, test time was about 1.000000 seconds 00:21:10.755 00:21:10.755 Latency(us) 00:21:10.755 [2024-12-06T17:33:05.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.755 [2024-12-06T17:33:05.539Z] =================================================================================================================== 00:21:10.755 [2024-12-06T17:33:05.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2160668 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:10.755 rmmod nvme_tcp 00:21:10.755 rmmod nvme_fabrics 00:21:10.755 rmmod nvme_keyring 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:10.755 18:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2160390 ']' 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2160390 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2160390 ']' 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2160390 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160390 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160390' 00:21:10.755 killing process with pid 2160390 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2160390 00:21:10.755 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2160390 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:11.023 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:11.024 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.024 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.024 18:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.932 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:12.932 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MczH2BUTmX /tmp/tmp.hBqgLoJ2i6 /tmp/tmp.CbCNiDTonl 00:21:12.932 00:21:12.932 real 1m27.957s 00:21:12.932 user 2m19.655s 00:21:12.932 sys 0m26.630s 00:21:12.932 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.932 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.932 ************************************ 00:21:12.932 END TEST nvmf_tls 
00:21:12.932 ************************************ 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:13.192 ************************************ 00:21:13.192 START TEST nvmf_fips 00:21:13.192 ************************************ 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:13.192 * Looking for test storage... 00:21:13.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:13.192 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.193 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:13.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.454 --rc genhtml_branch_coverage=1 00:21:13.454 --rc genhtml_function_coverage=1 00:21:13.454 --rc genhtml_legend=1 00:21:13.454 --rc geninfo_all_blocks=1 00:21:13.454 --rc geninfo_unexecuted_blocks=1 00:21:13.454 00:21:13.454 ' 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:13.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.454 --rc genhtml_branch_coverage=1 00:21:13.454 --rc genhtml_function_coverage=1 00:21:13.454 --rc genhtml_legend=1 00:21:13.454 --rc geninfo_all_blocks=1 00:21:13.454 --rc geninfo_unexecuted_blocks=1 00:21:13.454 00:21:13.454 ' 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:13.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.454 --rc genhtml_branch_coverage=1 00:21:13.454 --rc genhtml_function_coverage=1 00:21:13.454 --rc genhtml_legend=1 00:21:13.454 --rc geninfo_all_blocks=1 00:21:13.454 --rc geninfo_unexecuted_blocks=1 00:21:13.454 00:21:13.454 ' 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:13.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.454 --rc genhtml_branch_coverage=1 00:21:13.454 --rc genhtml_function_coverage=1 00:21:13.454 --rc genhtml_legend=1 00:21:13.454 --rc geninfo_all_blocks=1 00:21:13.454 --rc geninfo_unexecuted_blocks=1 00:21:13.454 00:21:13.454 ' 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.454 18:33:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.454 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:13.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:13.455 18:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:13.455 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:13.455 Error setting digest 00:21:13.455 4062E7B9197F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:13.455 4062E7B9197F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:13.456 
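The "Error setting digest" above is the pass condition, not a failure: with OPENSSL_CONF pointing at the generated spdk_fips.conf, only FIPS-approved algorithms can be fetched, so fips.sh requires the MD5 attempt to exit non-zero. The check reduces to roughly this sketch (provider names in the listing vary by distro):

    export OPENSSL_CONF=spdk_fips.conf
    openssl list -providers | grep name       # expect both a base and a fips provider
    if openssl md5 /dev/null; then
        echo 'md5 succeeded: FIPS mode is not active' >&2
        exit 1
    fi
    openssl sha256 /dev/null                  # an approved digest still works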
18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:13.456 18:33:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.600 18:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:21.600 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:21.600 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.600 18:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:21.600 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:21.600 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.600 18:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:21.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:21:21.600 00:21:21.600 --- 10.0.0.2 ping statistics --- 00:21:21.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.600 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:21.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:21:21.600 00:21:21.600 --- 10.0.0.1 ping statistics --- 00:21:21.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.600 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2165837 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2165837 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2165837 ']' 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.600 18:33:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.600 [2024-12-06 18:33:15.798527] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
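The namespace wiring earlier in the trace is what lets target and initiator share one dual-port e810 NIC: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and each nvmf_tgt in this file therefore runs under ip netns exec. Condensed from the commands logged above (interface names are specific to this machine, and the workspace path prefix is dropped):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2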
00:21:21.600 [2024-12-06 18:33:15.798602] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.600 [2024-12-06 18:33:15.897216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.600 [2024-12-06 18:33:15.947299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.600 [2024-12-06 18:33:15.947349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.600 [2024-12-06 18:33:15.947358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.600 [2024-12-06 18:33:15.947370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.600 [2024-12-06 18:33:15.947377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.600 [2024-12-06 18:33:15.948118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:21.860 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:22.120 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.BkM 00:21:22.120 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:22.120 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.BkM 00:21:22.120 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.BkM 00:21:22.120 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.BkM 00:21:22.120 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.120 [2024-12-06 18:33:16.819671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.120 [2024-12-06 18:33:16.835652] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:22.120 [2024-12-06 18:33:16.835958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.120 malloc0 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.380 18:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2166177 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2166177 /var/tmp/bdevperf.sock 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2166177 ']' 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:22.380 18:33:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:22.380 [2024-12-06 18:33:16.979161] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:21:22.380 [2024-12-06 18:33:16.979238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166177 ] 00:21:22.380 [2024-12-06 18:33:17.044974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.380 [2024-12-06 18:33:17.090423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.641 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.641 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:22.641 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.BkM 00:21:22.641 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:22.901 [2024-12-06 18:33:17.546944] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.901 TLSTESTn1 00:21:22.901 18:33:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:23.163 Running I/O for 10 seconds... 
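For reference, the client-side TLS setup driving this 10-second run comes down to a temporary PSK file plus three RPCs against the bdevperf socket. The key string and addresses below are the ones from this trace; a real deployment would generate its own interchange-format key:

    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/spdk-psk.BkM
    chmod 0600 /tmp/spdk-psk.BkM
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.BkM
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests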
00:21:25.047 5340.00 IOPS, 20.86 MiB/s [2024-12-06T17:33:20.770Z] 5321.00 IOPS, 20.79 MiB/s [2024-12-06T17:33:22.149Z] 5646.00 IOPS, 22.05 MiB/s [2024-12-06T17:33:23.090Z] 5508.00 IOPS, 21.52 MiB/s [2024-12-06T17:33:24.034Z] 5519.80 IOPS, 21.56 MiB/s [2024-12-06T17:33:24.977Z] 5343.33 IOPS, 20.87 MiB/s [2024-12-06T17:33:25.918Z] 5313.57 IOPS, 20.76 MiB/s [2024-12-06T17:33:26.857Z] 5301.50 IOPS, 20.71 MiB/s [2024-12-06T17:33:27.799Z] 5403.11 IOPS, 21.11 MiB/s [2024-12-06T17:33:28.060Z] 5309.80 IOPS, 20.74 MiB/s 00:21:33.276 Latency(us) 00:21:33.276 [2024-12-06T17:33:28.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.276 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:33.276 Verification LBA range: start 0x0 length 0x2000 00:21:33.276 TLSTESTn1 : 10.05 5296.22 20.69 0.00 0.00 24097.09 5352.11 46530.56 00:21:33.276 [2024-12-06T17:33:28.060Z] =================================================================================================================== 00:21:33.276 [2024-12-06T17:33:28.060Z] Total : 5296.22 20.69 0.00 0.00 24097.09 5352.11 46530.56 00:21:33.276 { 00:21:33.276 "results": [ 00:21:33.276 { 00:21:33.276 "job": "TLSTESTn1", 00:21:33.276 "core_mask": "0x4", 00:21:33.276 "workload": "verify", 00:21:33.276 "status": "finished", 00:21:33.276 "verify_range": { 00:21:33.276 "start": 0, 00:21:33.276 "length": 8192 00:21:33.276 }, 00:21:33.276 "queue_depth": 128, 00:21:33.276 "io_size": 4096, 00:21:33.276 "runtime": 10.049801, 00:21:33.276 "iops": 5296.224273495564, 00:21:33.276 "mibps": 20.68837606834205, 00:21:33.276 "io_failed": 0, 00:21:33.276 "io_timeout": 0, 00:21:33.276 "avg_latency_us": 24097.087003594734, 00:21:33.276 "min_latency_us": 5352.106666666667, 00:21:33.276 "max_latency_us": 46530.56 00:21:33.276 } 00:21:33.276 ], 00:21:33.276 "core_count": 1 00:21:33.276 } 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:33.276 nvmf_trace.0 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2166177 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2166177 ']' 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2166177 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.276 18:33:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2166177 00:21:33.276 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:33.276 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:33.276 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2166177' 00:21:33.276 killing process with pid 2166177 00:21:33.276 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2166177 00:21:33.276 Received shutdown signal, test time was about 10.000000 seconds 00:21:33.276 00:21:33.276 Latency(us) 00:21:33.276 [2024-12-06T17:33:28.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.276 [2024-12-06T17:33:28.060Z] =================================================================================================================== 00:21:33.276 [2024-12-06T17:33:28.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.276 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2166177 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:33.537 rmmod nvme_tcp 00:21:33.537 rmmod nvme_fabrics 00:21:33.537 rmmod nvme_keyring 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2165837 ']' 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2165837 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2165837 ']' 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2165837 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2165837 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.537 18:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2165837' 00:21:33.537 killing process with pid 2165837 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2165837 00:21:33.537 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2165837 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.799 18:33:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.716 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:35.716 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.BkM 00:21:35.716 00:21:35.716 real 0m22.644s 00:21:35.716 user 0m23.441s 00:21:35.716 sys 0m9.855s 00:21:35.716 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.716 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:35.716 ************************************ 00:21:35.716 END TEST nvmf_fips 00:21:35.716 ************************************ 00:21:35.716 18:33:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:35.716 18:33:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:35.716 18:33:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.716 18:33:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:35.979 ************************************ 00:21:35.979 START TEST nvmf_control_msg_list 00:21:35.979 ************************************ 00:21:35.979 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:35.979 * Looking for test storage... 
00:21:35.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:35.979 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:35.979 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:21:35.979 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:35.979 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:35.979 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.979 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.980 --rc genhtml_branch_coverage=1 00:21:35.980 --rc genhtml_function_coverage=1 00:21:35.980 --rc genhtml_legend=1 00:21:35.980 --rc geninfo_all_blocks=1 00:21:35.980 --rc geninfo_unexecuted_blocks=1 00:21:35.980 00:21:35.980 ' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.980 --rc genhtml_branch_coverage=1 00:21:35.980 --rc genhtml_function_coverage=1 00:21:35.980 --rc genhtml_legend=1 00:21:35.980 --rc geninfo_all_blocks=1 00:21:35.980 --rc geninfo_unexecuted_blocks=1 00:21:35.980 00:21:35.980 ' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.980 --rc genhtml_branch_coverage=1 00:21:35.980 --rc genhtml_function_coverage=1 00:21:35.980 --rc genhtml_legend=1 00:21:35.980 --rc geninfo_all_blocks=1 00:21:35.980 --rc geninfo_unexecuted_blocks=1 00:21:35.980 00:21:35.980 ' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:35.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.980 --rc genhtml_branch_coverage=1 00:21:35.980 --rc genhtml_function_coverage=1 00:21:35.980 --rc genhtml_legend=1 00:21:35.980 --rc geninfo_all_blocks=1 00:21:35.980 --rc geninfo_unexecuted_blocks=1 00:21:35.980 00:21:35.980 ' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:35.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:35.980 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:35.981 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:36.244 18:33:30 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:44.389 18:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:44.389 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:44.390 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.390 18:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:44.390 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:44.390 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:44.390 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:44.390 18:33:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:44.390 18:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:44.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:21:44.390 00:21:44.390 --- 10.0.0.2 ping statistics --- 00:21:44.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.390 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:44.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:21:44.390 00:21:44.390 --- 10.0.0.1 ping statistics --- 00:21:44.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.390 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2172535 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2172535 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:44.390 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2172535 ']' 00:21:44.391 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.391 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.391 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.391 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.391 18:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.391 [2024-12-06 18:33:38.327313] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:21:44.391 [2024-12-06 18:33:38.327384] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.391 [2024-12-06 18:33:38.425915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.391 [2024-12-06 18:33:38.475793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.391 [2024-12-06 18:33:38.475843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.391 [2024-12-06 18:33:38.475852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.391 [2024-12-06 18:33:38.475859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.391 [2024-12-06 18:33:38.475865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
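The app_setup_trace notices just printed describe how to pull the tracepoints this target was started with. As a minimal sketch of that workflow (paths shortened to the repo root; the -i 0 shared-memory id and -e 0xFFFF group mask are the ones visible in the nvmf_tgt command line above, and the copy destination is an arbitrary example):

  # the target under test was launched inside the namespace with all tracepoint groups enabled
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF

  # snapshot the events at runtime, exactly as the notice suggests
  spdk_trace -s nvmf -i 0

  # or keep the raw trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved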
00:21:44.391 [2024-12-06 18:33:38.476660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.391 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.391 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:44.391 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:44.391 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:44.391 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 [2024-12-06 18:33:39.179984] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 Malloc0 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.651 18:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:44.651 [2024-12-06 18:33:39.234526] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2172575 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2172577 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2172579 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2172575 00:21:44.651 18:33:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:44.651 [2024-12-06 18:33:39.345521] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:44.651 [2024-12-06 18:33:39.345947] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:44.651 [2024-12-06 18:33:39.346289] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:46.033 Initializing NVMe Controllers 00:21:46.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:46.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:46.033 Initialization complete. Launching workers. 
00:21:46.033 ======================================================== 00:21:46.033 Latency(us) 00:21:46.033 Device Information : IOPS MiB/s Average min max 00:21:46.033 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1487.00 5.81 672.61 286.67 1429.65 00:21:46.033 ======================================================== 00:21:46.033 Total : 1487.00 5.81 672.61 286.67 1429.65 00:21:46.033 00:21:46.033 [2024-12-06 18:33:40.419494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a98c0 is same with the state(6) to be set 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2172577 00:21:46.033 Initializing NVMe Controllers 00:21:46.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:46.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:46.033 Initialization complete. Launching workers. 00:21:46.033 ======================================================== 00:21:46.033 Latency(us) 00:21:46.033 Device Information : IOPS MiB/s Average min max 00:21:46.033 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40910.07 40705.48 41084.07 00:21:46.033 ======================================================== 00:21:46.033 Total : 25.00 0.10 40910.07 40705.48 41084.07 00:21:46.033 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2172579 00:21:46.033 Initializing NVMe Controllers 00:21:46.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:46.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:46.033 Initialization complete. Launching workers. 
00:21:46.033 ======================================================== 00:21:46.033 Latency(us) 00:21:46.033 Device Information : IOPS MiB/s Average min max 00:21:46.033 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1457.00 5.69 686.12 247.73 1315.28 00:21:46.033 ======================================================== 00:21:46.033 Total : 1457.00 5.69 686.12 247.73 1315.28 00:21:46.033 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:46.033 rmmod nvme_tcp 00:21:46.033 rmmod nvme_fabrics 00:21:46.033 rmmod nvme_keyring 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2172535 ']' 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2172535 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2172535 ']' 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2172535 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:46.033 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.034 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2172535 00:21:46.034 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.034 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.034 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2172535' 00:21:46.034 killing process with pid 2172535 00:21:46.034 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2172535 00:21:46.034 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2172535 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.294 18:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.204 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:48.204 00:21:48.204 real 0m12.467s 00:21:48.204 user 0m8.148s 00:21:48.204 sys 0m6.606s 00:21:48.204 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.204 18:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:48.204 ************************************ 00:21:48.204 END TEST nvmf_control_msg_list 00:21:48.204 ************************************ 00:21:48.464 18:33:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:48.464 18:33:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.464 18:33:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.464 18:33:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:48.464 ************************************ 00:21:48.464 START TEST nvmf_wait_for_buf 00:21:48.464 ************************************ 00:21:48.464 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:48.464 * Looking for test storage... 
00:21:48.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:48.464 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:48.464 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:48.464 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:48.747 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:48.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.748 --rc genhtml_branch_coverage=1 00:21:48.748 --rc genhtml_function_coverage=1 00:21:48.748 --rc genhtml_legend=1 00:21:48.748 --rc geninfo_all_blocks=1 00:21:48.748 --rc geninfo_unexecuted_blocks=1 00:21:48.748 00:21:48.748 ' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:48.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.748 --rc genhtml_branch_coverage=1 00:21:48.748 --rc genhtml_function_coverage=1 00:21:48.748 --rc genhtml_legend=1 00:21:48.748 --rc geninfo_all_blocks=1 00:21:48.748 --rc geninfo_unexecuted_blocks=1 00:21:48.748 00:21:48.748 ' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:48.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.748 --rc genhtml_branch_coverage=1 00:21:48.748 --rc genhtml_function_coverage=1 00:21:48.748 --rc genhtml_legend=1 00:21:48.748 --rc geninfo_all_blocks=1 00:21:48.748 --rc geninfo_unexecuted_blocks=1 00:21:48.748 00:21:48.748 ' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:48.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.748 --rc genhtml_branch_coverage=1 00:21:48.748 --rc genhtml_function_coverage=1 00:21:48.748 --rc genhtml_legend=1 00:21:48.748 --rc geninfo_all_blocks=1 00:21:48.748 --rc geninfo_unexecuted_blocks=1 00:21:48.748 00:21:48.748 ' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.748 18:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:48.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:48.748 18:33:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.903 
18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:56.903 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:56.903 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:56.903 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:56.903 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.903 18:33:50 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.903 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:56.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:21:56.904 00:21:56.904 --- 10.0.0.2 ping statistics --- 00:21:56.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.904 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:21:56.904 00:21:56.904 --- 10.0.0.1 ping statistics --- 00:21:56.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.904 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2177244 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2177244 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2177244 ']' 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.904 18:33:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.904 [2024-12-06 18:33:50.925111] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:21:56.904 [2024-12-06 18:33:50.925183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.904 [2024-12-06 18:33:51.007322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.904 [2024-12-06 18:33:51.057992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.904 [2024-12-06 18:33:51.058039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.904 [2024-12-06 18:33:51.058047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.904 [2024-12-06 18:33:51.058055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.904 [2024-12-06 18:33:51.058061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.904 [2024-12-06 18:33:51.058845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.163 18:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.163 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.163 Malloc0 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.164 [2024-12-06 18:33:51.917034] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.164 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:57.423 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.423 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:57.423 [2024-12-06 18:33:51.953326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.423 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.423 18:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:57.423 [2024-12-06 18:33:52.059773] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:58.804 Initializing NVMe Controllers 00:21:58.804 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:58.804 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:58.804 Initialization complete. Launching workers. 00:21:58.804 ======================================================== 00:21:58.804 Latency(us) 00:21:58.804 Device Information : IOPS MiB/s Average min max 00:21:58.804 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32263.80 8004.06 63856.77 00:21:58.804 ======================================================== 00:21:58.804 Total : 129.00 16.12 32263.80 8004.06 63856.77 00:21:58.804 00:21:58.804 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:58.804 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:58.805 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:58.805 rmmod nvme_tcp 00:21:58.805 rmmod nvme_fabrics 00:21:59.065 rmmod nvme_keyring 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2177244 ']' 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2177244 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2177244 ']' 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2177244 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2177244 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2177244' 00:21:59.065 killing process with pid 2177244 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2177244 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2177244 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.065 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.325 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.325 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.325 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.325 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.325 18:33:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.236 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.236 00:22:01.236 real 0m12.863s 00:22:01.236 user 0m5.323s 00:22:01.236 sys 0m6.130s 00:22:01.236 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.236 18:33:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:01.236 ************************************ 00:22:01.236 END TEST nvmf_wait_for_buf 00:22:01.236 ************************************ 00:22:01.236 18:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:01.236 18:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:01.236 18:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:01.236 18:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:01.236 18:33:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.236 18:33:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:09.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:09.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:09.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:09.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:09.377 ************************************ 00:22:09.377 START TEST nvmf_perf_adq 00:22:09.377 ************************************ 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:09.377 * Looking for test storage... 00:22:09.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:22:09.377 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:09.378 18:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:09.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.378 --rc genhtml_branch_coverage=1 00:22:09.378 --rc genhtml_function_coverage=1 00:22:09.378 --rc genhtml_legend=1 00:22:09.378 --rc geninfo_all_blocks=1 00:22:09.378 --rc geninfo_unexecuted_blocks=1 00:22:09.378 00:22:09.378 ' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:09.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.378 --rc genhtml_branch_coverage=1 00:22:09.378 --rc genhtml_function_coverage=1 00:22:09.378 --rc genhtml_legend=1 00:22:09.378 --rc geninfo_all_blocks=1 00:22:09.378 --rc geninfo_unexecuted_blocks=1 00:22:09.378 00:22:09.378 ' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:09.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.378 --rc genhtml_branch_coverage=1 00:22:09.378 --rc genhtml_function_coverage=1 00:22:09.378 --rc genhtml_legend=1 00:22:09.378 --rc geninfo_all_blocks=1 00:22:09.378 --rc geninfo_unexecuted_blocks=1 00:22:09.378 00:22:09.378 ' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:09.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:09.378 --rc genhtml_branch_coverage=1 00:22:09.378 --rc genhtml_function_coverage=1 00:22:09.378 --rc genhtml_legend=1 00:22:09.378 --rc geninfo_all_blocks=1 00:22:09.378 --rc geninfo_unexecuted_blocks=1 00:22:09.378 00:22:09.378 ' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
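perf_adq re-sources test/nvmf/common.sh here, so the same non-fatal complaint from its line 33 shows up again below: `'[' '' -eq 1 ']'` is a numeric test against an empty value, bash's test builtin rejects it with "integer expression expected" and returns non-zero, and build_nvmf_app_args simply falls through as if the test were false. A short reproduction, with an illustrative variable name (an assumption, not common.sh's own):

    flag=""                                  # stands in for an unset/empty toggle
    [ "$flag" -eq 1 ] && echo enabled        # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting to 0 keeps the test quiet

The message is cosmetic in this run: the log shows execution continuing normally past it both times it appears.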
00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:09.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:09.378 18:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:09.378 18:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:15.963 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.963 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.963 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.963 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.964 18:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.964 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.964 18:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.964 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:15.964 18:34:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:17.347 18:34:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:19.262 18:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:24.552 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:24.553 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:24.553 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:24.553 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:24.553 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:24.553 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:24.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:24.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:22:24.815 00:22:24.815 --- 10.0.0.2 ping statistics --- 00:22:24.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.815 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:24.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:24.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:22:24.815 00:22:24.815 --- 10.0.0.1 ping statistics --- 00:22:24.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:24.815 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:24.815 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2187388 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2187388 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2187388 ']' 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.816 18:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.816 [2024-12-06 18:34:19.470985] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:22:24.816 [2024-12-06 18:34:19.471054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.816 [2024-12-06 18:34:19.571821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.077 [2024-12-06 18:34:19.626745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.077 [2024-12-06 18:34:19.626798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.077 [2024-12-06 18:34:19.626808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.077 [2024-12-06 18:34:19.626815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.077 [2024-12-06 18:34:19.626821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.077 [2024-12-06 18:34:19.628763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.077 [2024-12-06 18:34:19.628926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.077 [2024-12-06 18:34:19.629090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.077 [2024-12-06 18:34:19.629091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.653 
18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.653 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.922 [2024-12-06 18:34:20.498932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.922 Malloc1 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.922 [2024-12-06 18:34:20.572509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2187514 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:25.922 18:34:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:27.840 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:27.840 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.840 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:27.840 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.840 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:27.840 "tick_rate": 2400000000, 00:22:27.840 "poll_groups": [ 00:22:27.840 { 00:22:27.840 "name": "nvmf_tgt_poll_group_000", 00:22:27.840 "admin_qpairs": 1, 00:22:27.840 "io_qpairs": 1, 00:22:27.840 "current_admin_qpairs": 1, 00:22:27.840 "current_io_qpairs": 1, 00:22:27.840 "pending_bdev_io": 0, 00:22:27.840 "completed_nvme_io": 17168, 00:22:27.840 "transports": [ 00:22:27.840 { 00:22:27.840 "trtype": "TCP" 00:22:27.840 } 00:22:27.840 ] 00:22:27.840 }, 00:22:27.840 { 00:22:27.840 "name": "nvmf_tgt_poll_group_001", 00:22:27.840 "admin_qpairs": 0, 00:22:27.840 "io_qpairs": 1, 00:22:27.840 "current_admin_qpairs": 0, 00:22:27.840 "current_io_qpairs": 1, 00:22:27.840 "pending_bdev_io": 0, 00:22:27.840 "completed_nvme_io": 18971, 00:22:27.840 "transports": [ 00:22:27.840 { 00:22:27.840 "trtype": "TCP" 00:22:27.840 } 00:22:27.840 ] 00:22:27.840 }, 00:22:27.840 { 00:22:27.840 "name": "nvmf_tgt_poll_group_002", 00:22:27.840 "admin_qpairs": 0, 00:22:27.840 "io_qpairs": 1, 00:22:27.840 "current_admin_qpairs": 0, 00:22:27.840 "current_io_qpairs": 1, 00:22:27.840 "pending_bdev_io": 0, 00:22:27.840 "completed_nvme_io": 19049, 00:22:27.840 "transports": [ 00:22:27.840 { 00:22:27.840 "trtype": "TCP" 00:22:27.840 } 00:22:27.840 ] 00:22:27.840 }, 00:22:27.840 { 00:22:27.840 "name": "nvmf_tgt_poll_group_003", 00:22:27.840 "admin_qpairs": 0, 00:22:27.840 "io_qpairs": 1, 00:22:27.840 "current_admin_qpairs": 0, 00:22:27.840 "current_io_qpairs": 1, 00:22:27.840 "pending_bdev_io": 0, 00:22:27.840 "completed_nvme_io": 16893, 00:22:27.840 "transports": [ 00:22:27.840 { 00:22:27.840 "trtype": "TCP" 00:22:27.840 } 00:22:27.840 ] 00:22:27.840 } 00:22:27.840 ] 00:22:27.840 }' 00:22:27.840 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:27.840 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:28.103 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:28.103 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:28.103 18:34:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2187514 00:22:36.336 Initializing NVMe Controllers 00:22:36.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:36.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:36.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:36.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:36.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:22:36.336 Initialization complete. Launching workers. 00:22:36.336 ======================================================== 00:22:36.336 Latency(us) 00:22:36.336 Device Information : IOPS MiB/s Average min max 00:22:36.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12488.21 48.78 5125.27 1221.30 11033.16 00:22:36.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13625.70 53.23 4696.79 1226.30 12900.43 00:22:36.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13726.10 53.62 4664.02 1267.38 12990.20 00:22:36.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12820.50 50.08 4992.51 1198.41 13584.94 00:22:36.336 ======================================================== 00:22:36.336 Total : 52660.50 205.71 4861.85 1198.41 13584.94 00:22:36.336 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:36.336 rmmod nvme_tcp 00:22:36.336 rmmod nvme_fabrics 00:22:36.336 rmmod nvme_keyring 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2187388 ']' 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2187388 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2187388 ']' 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2187388 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2187388 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2187388' 00:22:36.336 killing process with pid 2187388 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2187388 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2187388 00:22:36.336 18:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.336 18:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.325 18:34:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:38.325 18:34:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:38.325 18:34:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:38.325 18:34:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:39.710 18:34:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:42.250 18:34:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:47.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:47.535 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.535 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:47.536 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:47.536 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:47.536 18:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:47.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:47.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:22:47.536 00:22:47.536 --- 10.0.0.2 ping statistics --- 00:22:47.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.536 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:47.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:22:47.536 00:22:47.536 --- 10.0.0.1 ping statistics --- 00:22:47.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.536 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:47.536 net.core.busy_poll = 1 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:47.536 net.core.busy_read = 1 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:47.536 18:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2192158 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2192158 00:22:47.536 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:47.537 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2192158 ']' 00:22:47.537 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.537 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.537 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.537 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.537 18:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.537 [2024-12-06 18:34:42.240543] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:22:47.537 [2024-12-06 18:34:42.240619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.799 [2024-12-06 18:34:42.340354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:47.799 [2024-12-06 18:34:42.393545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:47.799 [2024-12-06 18:34:42.393602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.799 [2024-12-06 18:34:42.393611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.799 [2024-12-06 18:34:42.393619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.799 [2024-12-06 18:34:42.393625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.799 [2024-12-06 18:34:42.395686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.799 [2024-12-06 18:34:42.395907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.799 [2024-12-06 18:34:42.396070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:47.799 [2024-12-06 18:34:42.396072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.373 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 18:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 [2024-12-06 18:34:43.257703] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 Malloc1 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:48.634 [2024-12-06 18:34:43.334619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2192349 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:48.634 18:34:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:51.179 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:51.179 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.179 18:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:51.179 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.179 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:51.179 "tick_rate": 2400000000, 00:22:51.179 "poll_groups": [ 00:22:51.179 { 00:22:51.179 "name": "nvmf_tgt_poll_group_000", 00:22:51.179 "admin_qpairs": 1, 00:22:51.179 "io_qpairs": 0, 00:22:51.179 "current_admin_qpairs": 1, 00:22:51.179 "current_io_qpairs": 0, 00:22:51.179 "pending_bdev_io": 0, 00:22:51.179 "completed_nvme_io": 0, 00:22:51.179 "transports": [ 00:22:51.179 { 00:22:51.179 "trtype": "TCP" 00:22:51.179 } 00:22:51.179 ] 00:22:51.179 }, 00:22:51.179 { 00:22:51.179 "name": "nvmf_tgt_poll_group_001", 00:22:51.179 "admin_qpairs": 0, 00:22:51.179 "io_qpairs": 4, 00:22:51.179 "current_admin_qpairs": 0, 00:22:51.179 "current_io_qpairs": 4, 00:22:51.179 "pending_bdev_io": 0, 00:22:51.179 "completed_nvme_io": 33656, 00:22:51.179 "transports": [ 00:22:51.179 { 00:22:51.179 "trtype": "TCP" 00:22:51.179 } 00:22:51.179 ] 00:22:51.179 }, 00:22:51.179 { 00:22:51.179 "name": "nvmf_tgt_poll_group_002", 00:22:51.179 "admin_qpairs": 0, 00:22:51.179 "io_qpairs": 0, 00:22:51.179 "current_admin_qpairs": 0, 00:22:51.179 "current_io_qpairs": 0, 00:22:51.179 "pending_bdev_io": 0, 00:22:51.179 "completed_nvme_io": 0, 00:22:51.179 "transports": [ 00:22:51.179 { 00:22:51.179 "trtype": "TCP" 00:22:51.179 } 00:22:51.179 ] 00:22:51.179 }, 00:22:51.180 { 00:22:51.180 "name": "nvmf_tgt_poll_group_003", 00:22:51.180 "admin_qpairs": 0, 00:22:51.180 "io_qpairs": 0, 00:22:51.180 "current_admin_qpairs": 0, 00:22:51.180 "current_io_qpairs": 0, 00:22:51.180 "pending_bdev_io": 0, 00:22:51.180 "completed_nvme_io": 0, 00:22:51.180 "transports": [ 00:22:51.180 { 00:22:51.180 "trtype": "TCP" 00:22:51.180 } 00:22:51.180 ] 00:22:51.180 } 00:22:51.180 ] 00:22:51.180 }' 00:22:51.180 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:51.180 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:51.180 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:22:51.180 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:22:51.180 18:34:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2192349 00:22:59.317 Initializing NVMe Controllers 00:22:59.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:59.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:59.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:59.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:59.317 Initialization complete. Launching workers. 
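Before the perf numbers land below, note what the count=3 / '[[ 3 -lt 2 ]]' check above actually asserts: with adq_configure_driver in effect, every I/O qpair should have been steered into the queue set behind hardware traffic class 1, leaving all but one poll group idle (nvmf_tgt_poll_group_001 carries all four io_qpairs in the stats above). Condensed from this trace into a standalone sketch (interface name, addresses and the ice/E810 hw-tc-offload requirement are this run's; the ip netns exec cvl_0_0_ns_spdk prefix is dropped for brevity), the driver-side recipe plus the assertion amount to:

# TC0 = queues 0-1 (default traffic), TC1 = queues 2-3, offloaded to
# hardware channels; NVMe/TCP traffic to 10.0.0.2:4420 is pinned to TC1
# entirely in hardware (skip_sw).
ethtool --offload cvl_0_0 hw-tc-offload on
sysctl -w net.core.busy_poll=1 net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# Steering holds when the I/O qpairs share one poll group: with four
# groups, three must report zero current_io_qpairs, so count=3 clearing
# the '-lt 2' gate is the success path.
count=$(rpc_cmd nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
if (( count < 2 )); then
    echo "ADQ steering failed: I/O spread across poll groups" >&2
    exit 1
fi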
00:22:59.317 ======================================================== 00:22:59.317 Latency(us) 00:22:59.318 Device Information : IOPS MiB/s Average min max 00:22:59.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5767.70 22.53 11111.03 1180.79 61336.27 00:22:59.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5736.50 22.41 11194.33 1196.03 57852.51 00:22:59.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6726.20 26.27 9517.19 1392.00 55629.09 00:22:59.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6339.60 24.76 10098.30 1383.44 59629.77 00:22:59.318 ======================================================== 00:22:59.318 Total : 24570.00 95.98 10432.85 1180.79 61336.27 00:22:59.318 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:59.318 rmmod nvme_tcp 00:22:59.318 rmmod nvme_fabrics 00:22:59.318 rmmod nvme_keyring 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2192158 ']' 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2192158 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2192158 ']' 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2192158 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2192158 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2192158' 00:22:59.318 killing process with pid 2192158 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2192158 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2192158 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:59.318 
18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.318 18:34:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:02.616 00:23:02.616 real 0m53.718s 00:23:02.616 user 2m49.635s 00:23:02.616 sys 0m11.793s 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.616 ************************************ 00:23:02.616 END TEST nvmf_perf_adq 00:23:02.616 ************************************ 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:02.616 ************************************ 00:23:02.616 START TEST nvmf_shutdown 00:23:02.616 ************************************ 00:23:02.616 18:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:02.616 * Looking for test storage... 
00:23:02.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:02.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.616 --rc genhtml_branch_coverage=1 00:23:02.616 --rc genhtml_function_coverage=1 00:23:02.616 --rc genhtml_legend=1 00:23:02.616 --rc geninfo_all_blocks=1 00:23:02.616 --rc geninfo_unexecuted_blocks=1 00:23:02.616 00:23:02.616 ' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:02.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.616 --rc genhtml_branch_coverage=1 00:23:02.616 --rc genhtml_function_coverage=1 00:23:02.616 --rc genhtml_legend=1 00:23:02.616 --rc geninfo_all_blocks=1 00:23:02.616 --rc geninfo_unexecuted_blocks=1 00:23:02.616 00:23:02.616 ' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:02.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.616 --rc genhtml_branch_coverage=1 00:23:02.616 --rc genhtml_function_coverage=1 00:23:02.616 --rc genhtml_legend=1 00:23:02.616 --rc geninfo_all_blocks=1 00:23:02.616 --rc geninfo_unexecuted_blocks=1 00:23:02.616 00:23:02.616 ' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:02.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.616 --rc genhtml_branch_coverage=1 00:23:02.616 --rc genhtml_function_coverage=1 00:23:02.616 --rc genhtml_legend=1 00:23:02.616 --rc geninfo_all_blocks=1 00:23:02.616 --rc geninfo_unexecuted_blocks=1 00:23:02.616 00:23:02.616 ' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
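The probe above is scripts/common.sh deciding which spelling of the --rc coverage options the installed lcov wants: 'lt 1.15 2' splits both version strings on IFS=.-: and compares them field by field, and a result below 2 selects the old lcov_branch_coverage/lcov_function_coverage key names seen in the exports. Reduced to its core, a simplified sketch that assumes purely numeric fields (all the lcov check needs):

# Return 0 (true) when dotted version $1 sorts strictly before $2.
version_lt() {
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        if (( ${a[i]:-0} != ${b[i]:-0} )); then
            (( ${a[i]:-0} < ${b[i]:-0} ))
            return
        fi
    done
    return 1    # equal is not less-than
}

version_lt 1.15 2 && echo 'pre-2.0 lcov: use the lcov_-prefixed --rc keys'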
00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:02.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:02.616 18:34:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:02.616 ************************************ 00:23:02.616 START TEST nvmf_shutdown_tc1 00:23:02.616 ************************************ 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:02.616 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:02.617 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:02.617 18:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.757 18:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.757 18:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:10.757 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:10.757 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:10.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:10.757 18:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:10.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:10.757 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:23:10.758 00:23:10.758 --- 10.0.0.2 ping statistics --- 00:23:10.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.758 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:23:10.758 00:23:10.758 --- 10.0.0.1 ping statistics --- 00:23:10.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.758 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2198813 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2198813 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2198813 ']' 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
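nvmfappstart below backgrounds nvmf_tgt inside the namespace and then sits in waitforlisten until the UNIX-domain RPC socket answers, which is what the "Waiting for process to start up..." line reports. A standalone approximation of that startup handshake, under this run's paths and masks (rpc_get_methods is just a cheap RPC used here to probe liveness):

# Start the target in the test namespace and poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for _ in $(seq 1 100); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    # Bail out early if the target crashed instead of coming up.
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.1
done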
00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.758 18:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.758 [2024-12-06 18:35:04.926717] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:23:10.758 [2024-12-06 18:35:04.926795] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.758 [2024-12-06 18:35:05.031766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.758 [2024-12-06 18:35:05.087831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.758 [2024-12-06 18:35:05.087888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.758 [2024-12-06 18:35:05.087897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.758 [2024-12-06 18:35:05.087904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.758 [2024-12-06 18:35:05.087910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.758 [2024-12-06 18:35:05.090248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.758 [2024-12-06 18:35:05.090450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.758 [2024-12-06 18:35:05.090950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:10.758 [2024-12-06 18:35:05.091048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.018 [2024-12-06 18:35:05.785062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:11.018 18:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.018 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.278 18:35:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.278 Malloc1 
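Those ten cat calls append one block per subsystem to rpcs.txt, and the bare rpc_cmd at shutdown.sh@36 then replays the whole file against the target, which is where the Malloc1..Malloc10 creations below come from. A standalone sketch of the same batching (RPC names match those used earlier in this run; the serial string here is shortened for illustration):

rm -f rpcs.txt
for i in {1..10}; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
# rpc_cmd streams the batch over one persistent rpc.py session; one
# process per line works too, just slower ($cmd intentionally unquoted
# so each line splits into command plus arguments):
while read -r cmd; do
    ./scripts/rpc.py $cmd
done < rpcs.txt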
00:23:11.278 [2024-12-06 18:35:05.913389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.278 Malloc2 00:23:11.278 Malloc3 00:23:11.278 Malloc4 00:23:11.538 Malloc5 00:23:11.538 Malloc6 00:23:11.538 Malloc7 00:23:11.538 Malloc8 00:23:11.538 Malloc9 00:23:11.538 Malloc10 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2199197 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2199197 /var/tmp/bdevperf.sock 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2199197 ']' 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
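[Annotation] The long run of config+=("$(cat <<-EOF ...")") traces that follows is the xtrace expansion of gen_nvmf_target_json from nvmf/common.sh: it builds one bdev_nvme_attach_controller stanza per requested subsystem, joins the stanzas with commas, and checks the result with jq before handing it to bdev_svc (fd 63) and later bdevperf (fd 62) via process substitution. Distilled from the trace below — a sketch, not the verbatim function body:

# Condensed from the xtrace that follows; the stanza fields match the trace,
# the surrounding function scaffolding is reconstructed.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    IFS=,                                 # join the stanzas with commas
    printf '%s\n' "${config[*]}" | jq .   # jq validates the assembled JSON
}

The fully substituted output (Nvme1..Nvme10, all pointing at 10.0.0.2:4420 with digests off) is printed further down, and the consumer is invoked as bdev_svc ... --json <(gen_nvmf_target_json "${num_subsystems[@]}").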
00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.799 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 00:23:11.800 } 00:23:11.800 EOF 00:23:11.800 )") 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 00:23:11.800 } 00:23:11.800 EOF 00:23:11.800 )") 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 
00:23:11.800 } 00:23:11.800 EOF 00:23:11.800 )") 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 00:23:11.800 } 00:23:11.800 EOF 00:23:11.800 )") 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 00:23:11.800 } 00:23:11.800 EOF 00:23:11.800 )") 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 00:23:11.800 } 00:23:11.800 EOF 00:23:11.800 )") 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.800 [2024-12-06 18:35:06.431358] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:23:11.800 [2024-12-06 18:35:06.431429] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 00:23:11.800 } 00:23:11.800 EOF 00:23:11.800 )") 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 00:23:11.800 } 00:23:11.800 EOF 00:23:11.800 )") 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.800 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.800 { 00:23:11.800 "params": { 00:23:11.800 "name": "Nvme$subsystem", 00:23:11.800 "trtype": "$TEST_TRANSPORT", 00:23:11.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.800 "adrfam": "ipv4", 00:23:11.800 "trsvcid": "$NVMF_PORT", 00:23:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.800 "hdgst": ${hdgst:-false}, 00:23:11.800 "ddgst": ${ddgst:-false} 00:23:11.800 }, 00:23:11.800 "method": "bdev_nvme_attach_controller" 00:23:11.801 } 00:23:11.801 EOF 00:23:11.801 )") 00:23:11.801 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.801 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:11.801 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:11.801 { 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme$subsystem", 00:23:11.801 "trtype": "$TEST_TRANSPORT", 00:23:11.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:11.801 "adrfam": "ipv4", 
00:23:11.801 "trsvcid": "$NVMF_PORT", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:11.801 "hdgst": ${hdgst:-false}, 00:23:11.801 "ddgst": ${ddgst:-false} 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 } 00:23:11.801 EOF 00:23:11.801 )") 00:23:11.801 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:11.801 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:11.801 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:11.801 18:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme1", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme2", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme3", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme4", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme5", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme6", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme7", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 
"adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme8", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme9", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 },{ 00:23:11.801 "params": { 00:23:11.801 "name": "Nvme10", 00:23:11.801 "trtype": "tcp", 00:23:11.801 "traddr": "10.0.0.2", 00:23:11.801 "adrfam": "ipv4", 00:23:11.801 "trsvcid": "4420", 00:23:11.801 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:11.801 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:11.801 "hdgst": false, 00:23:11.801 "ddgst": false 00:23:11.801 }, 00:23:11.801 "method": "bdev_nvme_attach_controller" 00:23:11.801 }' 00:23:11.801 [2024-12-06 18:35:06.529167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.801 [2024-12-06 18:35:06.582015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2199197 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:13.187 18:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:14.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2199197 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2198813 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.574 { 00:23:14.574 "params": { 00:23:14.574 "name": "Nvme$subsystem", 00:23:14.574 "trtype": "$TEST_TRANSPORT", 00:23:14.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.574 "adrfam": "ipv4", 00:23:14.574 "trsvcid": "$NVMF_PORT", 00:23:14.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.574 "hdgst": ${hdgst:-false}, 00:23:14.574 "ddgst": ${ddgst:-false} 00:23:14.574 }, 00:23:14.574 "method": "bdev_nvme_attach_controller" 00:23:14.574 } 00:23:14.574 EOF 00:23:14.574 )") 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.574 { 00:23:14.574 "params": { 00:23:14.574 "name": "Nvme$subsystem", 00:23:14.574 "trtype": "$TEST_TRANSPORT", 00:23:14.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.574 "adrfam": "ipv4", 00:23:14.574 "trsvcid": "$NVMF_PORT", 00:23:14.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.574 "hdgst": ${hdgst:-false}, 00:23:14.574 "ddgst": ${ddgst:-false} 00:23:14.574 }, 00:23:14.574 "method": "bdev_nvme_attach_controller" 00:23:14.574 } 00:23:14.574 EOF 00:23:14.574 )") 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.574 { 00:23:14.574 "params": { 00:23:14.574 "name": "Nvme$subsystem", 00:23:14.574 "trtype": "$TEST_TRANSPORT", 00:23:14.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.574 "adrfam": "ipv4", 00:23:14.574 "trsvcid": "$NVMF_PORT", 00:23:14.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.574 "hdgst": ${hdgst:-false}, 00:23:14.574 "ddgst": ${ddgst:-false} 00:23:14.574 }, 00:23:14.574 "method": "bdev_nvme_attach_controller" 00:23:14.574 } 00:23:14.574 EOF 00:23:14.574 )") 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.574 { 00:23:14.574 "params": { 00:23:14.574 "name": "Nvme$subsystem", 00:23:14.574 "trtype": "$TEST_TRANSPORT", 00:23:14.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.574 "adrfam": "ipv4", 00:23:14.574 "trsvcid": "$NVMF_PORT", 00:23:14.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.574 "hdgst": ${hdgst:-false}, 00:23:14.574 "ddgst": ${ddgst:-false} 00:23:14.574 }, 00:23:14.574 "method": "bdev_nvme_attach_controller" 00:23:14.574 } 00:23:14.574 EOF 00:23:14.574 )") 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.574 { 00:23:14.574 "params": { 00:23:14.574 "name": "Nvme$subsystem", 00:23:14.574 "trtype": "$TEST_TRANSPORT", 00:23:14.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.574 "adrfam": "ipv4", 00:23:14.574 "trsvcid": "$NVMF_PORT", 00:23:14.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.574 "hdgst": ${hdgst:-false}, 00:23:14.574 "ddgst": ${ddgst:-false} 00:23:14.574 }, 00:23:14.574 "method": "bdev_nvme_attach_controller" 00:23:14.574 } 00:23:14.574 EOF 00:23:14.574 )") 00:23:14.574 18:35:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.574 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.574 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.574 { 00:23:14.574 "params": { 00:23:14.574 "name": "Nvme$subsystem", 00:23:14.574 "trtype": "$TEST_TRANSPORT", 00:23:14.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "$NVMF_PORT", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.575 "hdgst": ${hdgst:-false}, 00:23:14.575 "ddgst": ${ddgst:-false} 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 } 00:23:14.575 EOF 00:23:14.575 )") 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.575 { 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme$subsystem", 00:23:14.575 "trtype": "$TEST_TRANSPORT", 00:23:14.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "$NVMF_PORT", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.575 "hdgst": ${hdgst:-false}, 00:23:14.575 "ddgst": ${ddgst:-false} 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 } 00:23:14.575 EOF 00:23:14.575 )") 00:23:14.575 [2024-12-06 18:35:09.012595] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:23:14.575 [2024-12-06 18:35:09.012653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199763 ] 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.575 { 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme$subsystem", 00:23:14.575 "trtype": "$TEST_TRANSPORT", 00:23:14.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "$NVMF_PORT", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.575 "hdgst": ${hdgst:-false}, 00:23:14.575 "ddgst": ${ddgst:-false} 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 } 00:23:14.575 EOF 00:23:14.575 )") 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.575 { 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme$subsystem", 00:23:14.575 "trtype": "$TEST_TRANSPORT", 00:23:14.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "$NVMF_PORT", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.575 "hdgst": ${hdgst:-false}, 00:23:14.575 "ddgst": ${ddgst:-false} 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 } 00:23:14.575 EOF 00:23:14.575 )") 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:14.575 { 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme$subsystem", 00:23:14.575 "trtype": "$TEST_TRANSPORT", 00:23:14.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "$NVMF_PORT", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.575 "hdgst": ${hdgst:-false}, 00:23:14.575 "ddgst": ${ddgst:-false} 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 } 00:23:14.575 EOF 00:23:14.575 )") 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:14.575 18:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme1", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme2", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme3", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme4", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme5", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme6", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme7", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme8", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme9", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 },{ 00:23:14.575 "params": { 00:23:14.575 "name": "Nvme10", 00:23:14.575 "trtype": "tcp", 00:23:14.575 "traddr": "10.0.0.2", 00:23:14.575 "adrfam": "ipv4", 00:23:14.575 "trsvcid": "4420", 00:23:14.575 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:14.575 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:14.575 "hdgst": false, 00:23:14.575 "ddgst": false 00:23:14.575 }, 00:23:14.575 "method": "bdev_nvme_attach_controller" 00:23:14.575 }' 00:23:14.575 [2024-12-06 18:35:09.099751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.575 [2024-12-06 18:35:09.135990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.959 Running I/O for 1 seconds... 00:23:17.340 1861.00 IOPS, 116.31 MiB/s 00:23:17.340 Latency(us) 00:23:17.340 [2024-12-06T17:35:12.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.340 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme1n1 : 1.19 214.48 13.41 0.00 0.00 293239.47 18131.63 253405.87 00:23:17.340 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme2n1 : 1.12 228.80 14.30 0.00 0.00 271184.21 24029.87 244667.73 00:23:17.340 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme3n1 : 1.07 240.32 15.02 0.00 0.00 253776.00 9939.63 262144.00 00:23:17.340 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme4n1 : 1.13 226.01 14.13 0.00 0.00 265850.03 21299.20 248162.99 00:23:17.340 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme5n1 : 1.17 219.23 13.70 0.00 0.00 269669.97 20425.39 251658.24 00:23:17.340 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme6n1 : 1.16 220.34 13.77 0.00 0.00 263260.16 19005.44 242920.11 00:23:17.340 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme7n1 : 1.20 266.84 16.68 0.00 0.00 214184.62 22173.01 249910.61 00:23:17.340 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme8n1 : 1.20 265.98 16.62 0.00 0.00 211107.84 19223.89 244667.73 00:23:17.340 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme9n1 : 1.21 265.15 16.57 0.00 0.00 207967.06 14090.24 251658.24 00:23:17.340 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:23:17.340 Verification LBA range: start 0x0 length 0x400 00:23:17.340 Nvme10n1 : 1.22 263.32 16.46 0.00 0.00 205526.29 2880.85 269134.51 00:23:17.340 [2024-12-06T17:35:12.124Z] =================================================================================================================== 00:23:17.340 [2024-12-06T17:35:12.124Z] Total : 2410.48 150.65 0.00 0.00 242314.74 2880.85 269134.51 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.340 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.340 rmmod nvme_tcp 00:23:17.600 rmmod nvme_fabrics 00:23:17.600 rmmod nvme_keyring 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2198813 ']' 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2198813 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2198813 ']' 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2198813 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2198813 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2198813' 00:23:17.600 killing process with pid 2198813 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2198813 00:23:17.600 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2198813 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.860 18:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.771 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.771 00:23:19.771 real 0m17.290s 00:23:19.771 user 0m35.876s 00:23:19.771 sys 0m6.990s 00:23:19.771 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.771 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.771 ************************************ 00:23:19.771 END TEST nvmf_shutdown_tc1 00:23:19.771 ************************************ 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:20.032 ************************************ 00:23:20.032 START TEST nvmf_shutdown_tc2 00:23:20.032 ************************************ 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:20.032 18:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.032 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:20.033 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:20.033 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:20.033 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.033 18:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:20.033 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.033 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:23:20.295 00:23:20.295 --- 10.0.0.2 ping statistics --- 00:23:20.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.295 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:23:20.295 00:23:20.295 --- 10.0.0.1 ping statistics --- 00:23:20.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.295 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.295 18:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.295 18:35:14 
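Collected out of the xtrace above, the whole nvmf_tcp_init topology fits in a dozen commands (taken verbatim from the log, with only the xtrace framing stripped): the target port cvl_0_0 moves into its own namespace, both ends get addresses on 10.0.0.0/24, a tagged iptables rule opens port 4420, and a ping in each direction proves the path before the target starts.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The comment tag on the iptables rule is what lets the teardown later remove exactly this rule with a grep -v SPDK_NVMF pass, as the cleanup after tc2 below shows.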
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2201003 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2201003 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2201003 ']' 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.295 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:20.295 [2024-12-06 18:35:15.067663] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:23:20.295 [2024-12-06 18:35:15.067726] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.555 [2024-12-06 18:35:15.162818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.555 [2024-12-06 18:35:15.196905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.555 [2024-12-06 18:35:15.196939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.555 [2024-12-06 18:35:15.196944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.555 [2024-12-06 18:35:15.196950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.555 [2024-12-06 18:35:15.196955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
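A note on the core mask before the reactor notices just below: -m 0x1E decodes as 0x1E = 0b11110, i.e. bits 1-4 set and bit 0 clear, which is why EAL reports "Total cores available: 4" and the four reactors land on cores 1, 2, 3 and 4. The bdevperf initiator started later in this test runs with -c 0x1 (core 0 only), so target and initiator presumably never contend for a core. A quick way to decode such a mask:

mask=0x1E
for core in {0..4}; do
    (( mask & (1 << core) )) && echo "core $core is in the mask"
done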
00:23:20.555 [2024-12-06 18:35:15.198507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.555 [2024-12-06 18:35:15.198678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.555 [2024-12-06 18:35:15.198838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.555 [2024-12-06 18:35:15.198839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:21.125 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.125 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:21.125 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.125 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.125 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.385 [2024-12-06 18:35:15.918423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.385 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.386 18:35:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.386 Malloc1 00:23:21.386 [2024-12-06 18:35:16.034519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.386 Malloc2 00:23:21.386 Malloc3 00:23:21.386 Malloc4 00:23:21.386 Malloc5 00:23:21.647 Malloc6 00:23:21.647 Malloc7 00:23:21.647 Malloc8 00:23:21.647 Malloc9 00:23:21.648 Malloc10 00:23:21.648 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.648 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:21.648 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.648 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.648 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2201383 00:23:21.648 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2201383 /var/tmp/bdevperf.sock 00:23:21.648 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2201383 ']' 00:23:21.648 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.908 18:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.908 { 00:23:21.908 "params": { 00:23:21.908 "name": "Nvme$subsystem", 00:23:21.908 "trtype": "$TEST_TRANSPORT", 00:23:21.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.908 "adrfam": "ipv4", 00:23:21.908 "trsvcid": "$NVMF_PORT", 00:23:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.908 "hdgst": ${hdgst:-false}, 00:23:21.908 "ddgst": ${ddgst:-false} 00:23:21.908 }, 00:23:21.908 "method": "bdev_nvme_attach_controller" 00:23:21.908 } 00:23:21.908 EOF 00:23:21.908 )") 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.908 { 00:23:21.908 "params": { 00:23:21.908 "name": "Nvme$subsystem", 00:23:21.908 "trtype": "$TEST_TRANSPORT", 00:23:21.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.908 "adrfam": "ipv4", 00:23:21.908 "trsvcid": "$NVMF_PORT", 00:23:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.908 "hdgst": ${hdgst:-false}, 00:23:21.908 "ddgst": ${ddgst:-false} 00:23:21.908 }, 00:23:21.908 "method": "bdev_nvme_attach_controller" 00:23:21.908 } 00:23:21.908 EOF 00:23:21.908 )") 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.908 { 00:23:21.908 "params": { 00:23:21.908 
"name": "Nvme$subsystem", 00:23:21.908 "trtype": "$TEST_TRANSPORT", 00:23:21.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.908 "adrfam": "ipv4", 00:23:21.908 "trsvcid": "$NVMF_PORT", 00:23:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.908 "hdgst": ${hdgst:-false}, 00:23:21.908 "ddgst": ${ddgst:-false} 00:23:21.908 }, 00:23:21.908 "method": "bdev_nvme_attach_controller" 00:23:21.908 } 00:23:21.908 EOF 00:23:21.908 )") 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.908 { 00:23:21.908 "params": { 00:23:21.908 "name": "Nvme$subsystem", 00:23:21.908 "trtype": "$TEST_TRANSPORT", 00:23:21.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.908 "adrfam": "ipv4", 00:23:21.908 "trsvcid": "$NVMF_PORT", 00:23:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.908 "hdgst": ${hdgst:-false}, 00:23:21.908 "ddgst": ${ddgst:-false} 00:23:21.908 }, 00:23:21.908 "method": "bdev_nvme_attach_controller" 00:23:21.908 } 00:23:21.908 EOF 00:23:21.908 )") 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.908 { 00:23:21.908 "params": { 00:23:21.908 "name": "Nvme$subsystem", 00:23:21.908 "trtype": "$TEST_TRANSPORT", 00:23:21.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.908 "adrfam": "ipv4", 00:23:21.908 "trsvcid": "$NVMF_PORT", 00:23:21.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.908 "hdgst": ${hdgst:-false}, 00:23:21.908 "ddgst": ${ddgst:-false} 00:23:21.908 }, 00:23:21.908 "method": "bdev_nvme_attach_controller" 00:23:21.908 } 00:23:21.908 EOF 00:23:21.908 )") 00:23:21.908 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.909 { 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme$subsystem", 00:23:21.909 "trtype": "$TEST_TRANSPORT", 00:23:21.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "$NVMF_PORT", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.909 "hdgst": ${hdgst:-false}, 00:23:21.909 "ddgst": ${ddgst:-false} 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 } 00:23:21.909 EOF 00:23:21.909 )") 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.909 [2024-12-06 18:35:16.479058] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:23:21.909 [2024-12-06 18:35:16.479111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201383 ] 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.909 { 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme$subsystem", 00:23:21.909 "trtype": "$TEST_TRANSPORT", 00:23:21.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "$NVMF_PORT", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.909 "hdgst": ${hdgst:-false}, 00:23:21.909 "ddgst": ${ddgst:-false} 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 } 00:23:21.909 EOF 00:23:21.909 )") 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.909 { 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme$subsystem", 00:23:21.909 "trtype": "$TEST_TRANSPORT", 00:23:21.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "$NVMF_PORT", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.909 "hdgst": ${hdgst:-false}, 00:23:21.909 "ddgst": ${ddgst:-false} 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 } 00:23:21.909 EOF 00:23:21.909 )") 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.909 { 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme$subsystem", 00:23:21.909 "trtype": "$TEST_TRANSPORT", 00:23:21.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "$NVMF_PORT", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.909 "hdgst": ${hdgst:-false}, 00:23:21.909 "ddgst": ${ddgst:-false} 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 } 00:23:21.909 EOF 00:23:21.909 )") 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:21.909 { 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme$subsystem", 00:23:21.909 "trtype": "$TEST_TRANSPORT", 00:23:21.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.909 
"adrfam": "ipv4", 00:23:21.909 "trsvcid": "$NVMF_PORT", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.909 "hdgst": ${hdgst:-false}, 00:23:21.909 "ddgst": ${ddgst:-false} 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 } 00:23:21.909 EOF 00:23:21.909 )") 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:21.909 18:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme1", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme2", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme3", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme4", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme5", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme6", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme7", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 
00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme8", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme9", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 },{ 00:23:21.909 "params": { 00:23:21.909 "name": "Nvme10", 00:23:21.909 "trtype": "tcp", 00:23:21.909 "traddr": "10.0.0.2", 00:23:21.909 "adrfam": "ipv4", 00:23:21.909 "trsvcid": "4420", 00:23:21.909 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:21.909 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:21.909 "hdgst": false, 00:23:21.909 "ddgst": false 00:23:21.909 }, 00:23:21.909 "method": "bdev_nvme_attach_controller" 00:23:21.909 }' 00:23:21.909 [2024-12-06 18:35:16.567824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.909 [2024-12-06 18:35:16.604377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.289 Running I/O for 10 seconds... 
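The --json /dev/fd/63 handed to bdevperf above is produced by gen_nvmf_target_json, and the xtrace shows its mechanics: one heredoc fragment per subsystem is pushed onto a config array (common.sh@582), each fragment is passed through jq as a sanity check (@584), and the fragments are joined with commas (@585-586) into the attach-controller list printed above. A reduced sketch of that pattern, not the verbatim common.sh code (the variables are the ones visible in this log: TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420):

config=()
for subsystem in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# join with commas and print, as at common.sh@585-586; the subshell keeps
# the IFS change local
( IFS=,; printf '%s\n' "${config[*]}" )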
00:23:23.289 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.289 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:23.289 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:23.289 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.289 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:23.550 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.809 18:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:23.809 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:24.068 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:24.068 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:24.068 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:24.068 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:24.068 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.068 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2201383 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2201383 ']' 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2201383 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201383 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201383' 00:23:24.329 killing process with pid 2201383 00:23:24.329 18:35:18 
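The polling loop that just completed (waitforio, shutdown.sh@58-70) asks bdevperf's RPC socket for Nvme1n1's iostat up to ten times, a quarter second apart, and succeeds once at least 100 reads have completed; here it saw 3, then 67, then 131 ops before breaking. Reconstructed from the xtrace (rpc_cmd is the harness wrapper around scripts/rpc.py; the loop shape is equivalent, not character-identical):

waitforio() {
    local ret=1 i=10 read_io_count
    while (( i != 0 )); do
        # read completed ops from bdevperf's private RPC socket
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
        (( i-- ))
    done
    return $ret
}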
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2201383
00:23:24.329 18:35:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2201383
00:23:24.329 2320.00 IOPS, 145.00 MiB/s [2024-12-06T17:35:19.113Z] Received shutdown signal, test time was about 1.021208 seconds
00:23:24.329
00:23:24.329 Latency(us)
00:23:24.329 [2024-12-06T17:35:19.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:24.329 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.329 Verification LBA range: start 0x0 length 0x400
00:23:24.329 Nvme1n1 : 0.96 199.51 12.47 0.00 0.00 317026.42 18459.31 248162.99
00:23:24.329 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.329 Verification LBA range: start 0x0 length 0x400
00:23:24.329 Nvme2n1 : 0.97 196.98 12.31 0.00 0.00 314701.65 20862.29 255153.49
00:23:24.329 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.329 Verification LBA range: start 0x0 length 0x400
00:23:24.329 Nvme3n1 : 0.99 262.07 16.38 0.00 0.00 231373.33 1938.77 246415.36
00:23:24.329 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.329 Verification LBA range: start 0x0 length 0x400
00:23:24.329 Nvme4n1 : 0.98 261.18 16.32 0.00 0.00 227532.80 18022.40 246415.36
00:23:24.329 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.329 Verification LBA range: start 0x0 length 0x400
00:23:24.329 Nvme5n1 : 0.99 261.63 16.35 0.00 0.00 222072.10 2307.41 244667.73
00:23:24.329 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.329 Verification LBA range: start 0x0 length 0x400
00:23:24.329 Nvme6n1 : 1.02 250.90 15.68 0.00 0.00 227032.11 7536.64 248162.99
00:23:24.329 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.329 Verification LBA range: start 0x0 length 0x400
00:23:24.329 Nvme7n1 : 0.99 263.59 16.47 0.00 0.00 210421.81 4887.89 221948.59
00:23:24.329 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.329 Verification LBA range: start 0x0 length 0x400
00:23:24.330 Nvme8n1 : 0.98 260.00 16.25 0.00 0.00 209297.71 16820.91 248162.99
00:23:24.330 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.330 Verification LBA range: start 0x0 length 0x400
00:23:24.330 Nvme9n1 : 0.97 197.73 12.36 0.00 0.00 268156.02 15947.09 270882.13
00:23:24.330 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.330 Verification LBA range: start 0x0 length 0x400
00:23:24.330 Nvme10n1 : 1.00 256.04 16.00 0.00 0.00 202628.48 16820.91 244667.73
00:23:24.330 [2024-12-06T17:35:19.114Z] ===================================================================================================================
00:23:24.330 [2024-12-06T17:35:19.114Z] Total : 2409.62 150.60 0.00 0.00 238321.41 1938.77 270882.13
00:23:24.591 18:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2201003
00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f
./local-job0-0-verify.state 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:25.535 rmmod nvme_tcp 00:23:25.535 rmmod nvme_fabrics 00:23:25.535 rmmod nvme_keyring 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2201003 ']' 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2201003 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2201003 ']' 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2201003 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:25.535 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2201003 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2201003' 00:23:25.797 killing process with pid 2201003 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2201003 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2201003 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.797 18:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:28.345 00:23:28.345 real 0m7.987s 00:23:28.345 user 0m24.242s 00:23:28.345 sys 0m1.354s 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.345 ************************************ 00:23:28.345 END TEST nvmf_shutdown_tc2 00:23:28.345 ************************************ 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:28.345 ************************************ 00:23:28.345 START TEST nvmf_shutdown_tc3 00:23:28.345 ************************************ 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:28.345 18:35:22 
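Before tc3 repeats the same initialization, the tc2 teardown traced above (nvmftestfini -> nvmfcleanup -> nvmf_tcp_fini, nvmf/common.sh@516-524 and @297-303) condenses to roughly the following. This is a sketch assembled from the xtrace; the netns deletion is the assumed effect of _remove_spdk_ns, which the trace wraps but never expands:

sync
modprobe -v -r nvme-tcp        # also unloads nvme_fabrics / nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                # killprocess 2201003 (reactor_1)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the comment-tagged rules
ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1           # clear the initiator-side address

The iptables pass pairs with the tagged insert during setup: because every rule the harness adds carries the SPDK_NVMF comment, the save/filter/restore round trip removes exactly those rules and leaves the rest of the firewall untouched.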
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.345 18:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:28.345 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:28.345 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.345 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:28.346 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:28.346 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
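The classification just traced (nvmf/common.sh@315-429) is driven by small per-family arrays of PCI device IDs against which each discovered function is matched. Listing only the IDs visible in this log, with the array names from the @320-344 assignments:

e810=(0x1592 0x159b)     # Intel E810 variants
x722=(0x37d2)            # Intel X722
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)   # Mellanox

Both functions here report 0x8086:0x159b with driver ice, so they match the e810 list; since this job sets SPDK_TEST_NVMF_NICS=e810, pci_devs is narrowed to the e810 set (@355-356) before the sysfs net-device lookup, and the 0x1017/0x1019 comparisons at @376-377, which only matter for Mellanox RDMA parts, fall through on tcp.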
00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
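nvmf_tcp_init above splits the two cvl interfaces across a network namespace so the target and the initiator get isolated IP stacks on one host. The same sequence, condensed directly from the traced commands (interface names and addresses exactly as logged):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

From here on, every command aimed at the target is wrapped in NVMF_TARGET_NS_CMD, i.e. ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt command line below carries that prefix.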
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:23:28.346 00:23:28.346 --- 10.0.0.2 ping statistics --- 00:23:28.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.346 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:23:28.346 18:35:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:23:28.346 00:23:28.346 --- 10.0.0.1 ping statistics --- 00:23:28.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.346 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2202756 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2202756 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:28.346 18:35:23 
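Before starting nvmf_tgt, the harness opens the NVMe/TCP port in the firewall and proves L3 reachability in both directions; the ipts helper at common.sh@287 simply wraps iptables and tags the rule for later cleanup. Condensed from the trace:

  # ipts wrapper: insert the ACCEPT rule with an SPDK_NVMF cleanup comment
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
  modprobe nvme-tcp                                  # kernel NVMe/TCP initiator module

Only after both pings return 0% loss does nvmfappstart launch the target inside the namespace.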
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2202756 ']' 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.346 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.346 [2024-12-06 18:35:23.111575] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:23:28.346 [2024-12-06 18:35:23.111626] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.608 [2024-12-06 18:35:23.177729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.608 [2024-12-06 18:35:23.207452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.608 [2024-12-06 18:35:23.207479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.608 [2024-12-06 18:35:23.207485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.608 [2024-12-06 18:35:23.207489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.608 [2024-12-06 18:35:23.207494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
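The waitforlisten calls traced around this point block until the freshly started target answers on its UNIX-domain RPC socket. A simplified sketch of that wait, assuming the usual rpc_get_methods probe and a 0.5 s retry interval (both assumptions; the exact body lives in common/autotest_common.sh@835-868):

  waitforlisten() {   # simplified sketch, not the verbatim helper
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i != 0; i--)); do
      kill -0 "$pid" 2> /dev/null || return 1          # target died during startup
      # assumed probe: any RPC that answers proves the socket is listening
      scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
      sleep 0.5
    done
    (( i != 0 ))   # the (( i == 0 )) test in the trace is the timeout branch
  }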
00:23:28.608 [2024-12-06 18:35:23.208673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.608 [2024-12-06 18:35:23.208832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.608 [2024-12-06 18:35:23.208975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.608 [2024-12-06 18:35:23.208976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.608 [2024-12-06 18:35:23.344377] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.608 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.609 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.609 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.609 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.609 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.868 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.868 Malloc1 00:23:28.868 [2024-12-06 18:35:23.450953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.868 Malloc2 00:23:28.868 Malloc3 00:23:28.868 Malloc4 00:23:28.868 Malloc5 00:23:28.868 Malloc6 00:23:29.128 Malloc7 00:23:29.128 Malloc8 00:23:29.128 Malloc9 00:23:29.128 Malloc10 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2202906 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2202906 /var/tmp/bdevperf.sock 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2202906 ']' 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.128 18:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.128 { 00:23:29.128 "params": { 00:23:29.128 "name": "Nvme$subsystem", 00:23:29.128 "trtype": "$TEST_TRANSPORT", 00:23:29.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.128 "adrfam": "ipv4", 00:23:29.128 "trsvcid": "$NVMF_PORT", 00:23:29.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.128 "hdgst": ${hdgst:-false}, 00:23:29.128 "ddgst": ${ddgst:-false} 00:23:29.128 }, 00:23:29.128 "method": "bdev_nvme_attach_controller" 00:23:29.128 } 00:23:29.128 EOF 00:23:29.128 )") 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.128 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.128 { 00:23:29.128 "params": { 00:23:29.128 "name": "Nvme$subsystem", 00:23:29.128 "trtype": "$TEST_TRANSPORT", 00:23:29.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.128 "adrfam": "ipv4", 00:23:29.128 "trsvcid": "$NVMF_PORT", 00:23:29.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.128 "hdgst": ${hdgst:-false}, 00:23:29.128 "ddgst": ${ddgst:-false} 00:23:29.128 }, 00:23:29.128 "method": "bdev_nvme_attach_controller" 00:23:29.128 } 00:23:29.128 EOF 00:23:29.128 )") 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.129 { 00:23:29.129 "params": { 00:23:29.129 
"name": "Nvme$subsystem", 00:23:29.129 "trtype": "$TEST_TRANSPORT", 00:23:29.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.129 "adrfam": "ipv4", 00:23:29.129 "trsvcid": "$NVMF_PORT", 00:23:29.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.129 "hdgst": ${hdgst:-false}, 00:23:29.129 "ddgst": ${ddgst:-false} 00:23:29.129 }, 00:23:29.129 "method": "bdev_nvme_attach_controller" 00:23:29.129 } 00:23:29.129 EOF 00:23:29.129 )") 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.129 { 00:23:29.129 "params": { 00:23:29.129 "name": "Nvme$subsystem", 00:23:29.129 "trtype": "$TEST_TRANSPORT", 00:23:29.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.129 "adrfam": "ipv4", 00:23:29.129 "trsvcid": "$NVMF_PORT", 00:23:29.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.129 "hdgst": ${hdgst:-false}, 00:23:29.129 "ddgst": ${ddgst:-false} 00:23:29.129 }, 00:23:29.129 "method": "bdev_nvme_attach_controller" 00:23:29.129 } 00:23:29.129 EOF 00:23:29.129 )") 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.129 { 00:23:29.129 "params": { 00:23:29.129 "name": "Nvme$subsystem", 00:23:29.129 "trtype": "$TEST_TRANSPORT", 00:23:29.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.129 "adrfam": "ipv4", 00:23:29.129 "trsvcid": "$NVMF_PORT", 00:23:29.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.129 "hdgst": ${hdgst:-false}, 00:23:29.129 "ddgst": ${ddgst:-false} 00:23:29.129 }, 00:23:29.129 "method": "bdev_nvme_attach_controller" 00:23:29.129 } 00:23:29.129 EOF 00:23:29.129 )") 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.129 { 00:23:29.129 "params": { 00:23:29.129 "name": "Nvme$subsystem", 00:23:29.129 "trtype": "$TEST_TRANSPORT", 00:23:29.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.129 "adrfam": "ipv4", 00:23:29.129 "trsvcid": "$NVMF_PORT", 00:23:29.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.129 "hdgst": ${hdgst:-false}, 00:23:29.129 "ddgst": ${ddgst:-false} 00:23:29.129 }, 00:23:29.129 "method": "bdev_nvme_attach_controller" 00:23:29.129 } 00:23:29.129 EOF 00:23:29.129 )") 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.129 [2024-12-06 18:35:23.894977] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:23:29.129 [2024-12-06 18:35:23.895026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2202906 ] 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.129 { 00:23:29.129 "params": { 00:23:29.129 "name": "Nvme$subsystem", 00:23:29.129 "trtype": "$TEST_TRANSPORT", 00:23:29.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.129 "adrfam": "ipv4", 00:23:29.129 "trsvcid": "$NVMF_PORT", 00:23:29.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.129 "hdgst": ${hdgst:-false}, 00:23:29.129 "ddgst": ${ddgst:-false} 00:23:29.129 }, 00:23:29.129 "method": "bdev_nvme_attach_controller" 00:23:29.129 } 00:23:29.129 EOF 00:23:29.129 )") 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.129 { 00:23:29.129 "params": { 00:23:29.129 "name": "Nvme$subsystem", 00:23:29.129 "trtype": "$TEST_TRANSPORT", 00:23:29.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.129 "adrfam": "ipv4", 00:23:29.129 "trsvcid": "$NVMF_PORT", 00:23:29.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.129 "hdgst": ${hdgst:-false}, 00:23:29.129 "ddgst": ${ddgst:-false} 00:23:29.129 }, 00:23:29.129 "method": "bdev_nvme_attach_controller" 00:23:29.129 } 00:23:29.129 EOF 00:23:29.129 )") 00:23:29.129 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.389 { 00:23:29.389 "params": { 00:23:29.389 "name": "Nvme$subsystem", 00:23:29.389 "trtype": "$TEST_TRANSPORT", 00:23:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.389 "adrfam": "ipv4", 00:23:29.389 "trsvcid": "$NVMF_PORT", 00:23:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.389 "hdgst": ${hdgst:-false}, 00:23:29.389 "ddgst": ${ddgst:-false} 00:23:29.389 }, 00:23:29.389 "method": "bdev_nvme_attach_controller" 00:23:29.389 } 00:23:29.389 EOF 00:23:29.389 )") 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.389 { 00:23:29.389 "params": { 00:23:29.389 "name": "Nvme$subsystem", 00:23:29.389 "trtype": "$TEST_TRANSPORT", 00:23:29.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.389 
"adrfam": "ipv4", 00:23:29.389 "trsvcid": "$NVMF_PORT", 00:23:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.389 "hdgst": ${hdgst:-false}, 00:23:29.389 "ddgst": ${ddgst:-false} 00:23:29.389 }, 00:23:29.389 "method": "bdev_nvme_attach_controller" 00:23:29.389 } 00:23:29.389 EOF 00:23:29.389 )") 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:29.389 18:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:29.389 "params": { 00:23:29.389 "name": "Nvme1", 00:23:29.389 "trtype": "tcp", 00:23:29.389 "traddr": "10.0.0.2", 00:23:29.389 "adrfam": "ipv4", 00:23:29.389 "trsvcid": "4420", 00:23:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.389 "hdgst": false, 00:23:29.389 "ddgst": false 00:23:29.389 }, 00:23:29.389 "method": "bdev_nvme_attach_controller" 00:23:29.389 },{ 00:23:29.389 "params": { 00:23:29.389 "name": "Nvme2", 00:23:29.389 "trtype": "tcp", 00:23:29.389 "traddr": "10.0.0.2", 00:23:29.389 "adrfam": "ipv4", 00:23:29.389 "trsvcid": "4420", 00:23:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.389 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.389 "hdgst": false, 00:23:29.389 "ddgst": false 00:23:29.389 }, 00:23:29.389 "method": "bdev_nvme_attach_controller" 00:23:29.389 },{ 00:23:29.389 "params": { 00:23:29.389 "name": "Nvme3", 00:23:29.389 "trtype": "tcp", 00:23:29.389 "traddr": "10.0.0.2", 00:23:29.389 "adrfam": "ipv4", 00:23:29.389 "trsvcid": "4420", 00:23:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:29.389 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:29.389 "hdgst": false, 00:23:29.389 "ddgst": false 00:23:29.389 }, 00:23:29.389 "method": "bdev_nvme_attach_controller" 00:23:29.389 },{ 00:23:29.389 "params": { 00:23:29.389 "name": "Nvme4", 00:23:29.389 "trtype": "tcp", 00:23:29.389 "traddr": "10.0.0.2", 00:23:29.389 "adrfam": "ipv4", 00:23:29.389 "trsvcid": "4420", 00:23:29.389 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:29.389 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:29.389 "hdgst": false, 00:23:29.389 "ddgst": false 00:23:29.389 }, 00:23:29.389 "method": "bdev_nvme_attach_controller" 00:23:29.389 },{ 00:23:29.389 "params": { 00:23:29.390 "name": "Nvme5", 00:23:29.390 "trtype": "tcp", 00:23:29.390 "traddr": "10.0.0.2", 00:23:29.390 "adrfam": "ipv4", 00:23:29.390 "trsvcid": "4420", 00:23:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:29.390 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:29.390 "hdgst": false, 00:23:29.390 "ddgst": false 00:23:29.390 }, 00:23:29.390 "method": "bdev_nvme_attach_controller" 00:23:29.390 },{ 00:23:29.390 "params": { 00:23:29.390 "name": "Nvme6", 00:23:29.390 "trtype": "tcp", 00:23:29.390 "traddr": "10.0.0.2", 00:23:29.390 "adrfam": "ipv4", 00:23:29.390 "trsvcid": "4420", 00:23:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:29.390 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:29.390 "hdgst": false, 00:23:29.390 "ddgst": false 00:23:29.390 }, 00:23:29.390 "method": "bdev_nvme_attach_controller" 00:23:29.390 },{ 00:23:29.390 "params": { 00:23:29.390 "name": "Nvme7", 00:23:29.390 "trtype": "tcp", 00:23:29.390 "traddr": "10.0.0.2", 
00:23:29.390 "adrfam": "ipv4", 00:23:29.390 "trsvcid": "4420", 00:23:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:29.390 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:29.390 "hdgst": false, 00:23:29.390 "ddgst": false 00:23:29.390 }, 00:23:29.390 "method": "bdev_nvme_attach_controller" 00:23:29.390 },{ 00:23:29.390 "params": { 00:23:29.390 "name": "Nvme8", 00:23:29.390 "trtype": "tcp", 00:23:29.390 "traddr": "10.0.0.2", 00:23:29.390 "adrfam": "ipv4", 00:23:29.390 "trsvcid": "4420", 00:23:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:29.390 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:29.390 "hdgst": false, 00:23:29.390 "ddgst": false 00:23:29.390 }, 00:23:29.390 "method": "bdev_nvme_attach_controller" 00:23:29.390 },{ 00:23:29.390 "params": { 00:23:29.390 "name": "Nvme9", 00:23:29.390 "trtype": "tcp", 00:23:29.390 "traddr": "10.0.0.2", 00:23:29.390 "adrfam": "ipv4", 00:23:29.390 "trsvcid": "4420", 00:23:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:29.390 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:29.390 "hdgst": false, 00:23:29.390 "ddgst": false 00:23:29.390 }, 00:23:29.390 "method": "bdev_nvme_attach_controller" 00:23:29.390 },{ 00:23:29.390 "params": { 00:23:29.390 "name": "Nvme10", 00:23:29.390 "trtype": "tcp", 00:23:29.390 "traddr": "10.0.0.2", 00:23:29.390 "adrfam": "ipv4", 00:23:29.390 "trsvcid": "4420", 00:23:29.390 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:29.390 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:29.390 "hdgst": false, 00:23:29.390 "ddgst": false 00:23:29.390 }, 00:23:29.390 "method": "bdev_nvme_attach_controller" 00:23:29.390 }' 00:23:29.390 [2024-12-06 18:35:23.982548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.390 [2024-12-06 18:35:24.019274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.772 Running I/O for 10 seconds... 
00:23:30.772 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.772 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:30.772 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:30.772 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.772 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:31.032 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:31.291 18:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=133 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 133 -ge 100 ']' 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2202756 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2202756 ']' 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2202756 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2202756 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:31.555 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:31.555 18:35:26 
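The three rpc_cmd/jq rounds above are shutdown.sh's waitforio helper: it polls bdev_get_iostat until Nvme1n1 has completed at least 100 reads (3, then 67, then 133 in this run), proving I/O is actually flowing before the target gets killed. Condensed from the traced logic at shutdown.sh@51-70:

  waitforio() {   # sketch of the loop traced above; rpc_cmd is the harness RPC wrapper
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
      read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
        jq -r '.bdevs[0].num_read_ops')
      [ "$read_io_count" -ge 100 ] && { ret=0; break; }   # 3 -> 67 -> 133 here
      sleep 0.25
    done
    return $ret   # 0 lets the test proceed to killprocess
  }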
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2202756' killing process with pid 2202756 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2202756 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2202756
00:23:31.555 [2024-12-06 18:35:26.329279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f290 is same with the state(6) to be set
(previous message repeated through 18:35:26.329632 for tqpair=0x188f290)
00:23:31.556 [2024-12-06 18:35:26.330919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f760 is same with the state(6) to be set
(previous message repeated through 18:35:26.331255 for tqpair=0x188f760)
00:23:31.557 [2024-12-06 18:35:26.332174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set
(previous message repeated from 18:35:26.332195; the captured log ends mid-run at 18:35:26.332303)
with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332409] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.332495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188fc50 is same with the state(6) to be set 00:23:31.557 [2024-12-06 18:35:26.333103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the 
state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 
18:35:26.333347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.333423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890120 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same 
with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334128] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.558 [2024-12-06 18:35:26.334174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334199] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the 
state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.559 [2024-12-06 18:35:26.334387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18905f0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.834 [2024-12-06 18:35:26.335550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 
18:35:26.335579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same 
with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.335751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1890fb0 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336244] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.336316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.835 [2024-12-06 18:35:26.346188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.835 [2024-12-06 18:35:26.346226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.835 [2024-12-06 18:35:26.346237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.835 [2024-12-06 18:35:26.346245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.835 [2024-12-06 18:35:26.346253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.835 [2024-12-06 18:35:26.346261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.835 [2024-12-06 18:35:26.346270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
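The flood of identical messages above comes from a single guard in SPDK's TCP transport: tcp.c:1790 is the error path of nvmf_tcp_qpair_set_recv_state(), which logs once every time a qpair is asked to enter the recv state it is already in, so a qpair that keeps being driven while it tears down emits one line per call. A minimal sketch of such a guard is below; the type and function names are illustrative assumptions, not SPDK's exact definitions.

    #include <stdio.h>

    /* Sketch of an "already in this state" guard that would produce the
     * repeated tcp.c:1790 message above. Names are illustrative only;
     * state(6) in the log is the numeric value of the requested state. */
    enum recv_state { RECV_STATE_ERROR = 6 };

    struct tcp_qpair {
        enum recv_state recv_state;
    };

    static void set_recv_state(struct tcp_qpair *tqpair, enum recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Each redundant call logs exactly once, which is why a qpair
             * polled repeatedly during teardown yields runs of identical lines. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&q, RECV_STATE_ERROR); /* logs: already in state(6) */
        return 0;
    }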
00:23:31.835 [2024-12-06 18:35:26.346188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:31.835 [2024-12-06 18:35:26.346226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3 ...]
00:23:31.835 [2024-12-06 18:35:26.346285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc38bd0 is same with the state(6) to be set
[... the same block of four aborted ASYNC EVENT REQUESTs followed by an nvme_tcp.c:326 recv-state *ERROR* repeated for tqpair=0x10a7ba0, 0xc39960, 0xb52610, 0x105e6c0, 0xc3a8d0 and 0xc36c90 ...]
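For reference, the "(00/08)" printed in each completion above is the NVMe status pair (status code type / status code): type 0x00 is the generic command status set and code 0x08 is Command Aborted due to SQ Deletion, which is why every outstanding ASYNC EVENT REQUEST fails this way while the admin submission queue is deleted during controller teardown. A rough sketch of decoding that status halfword follows; the bit layout is per the NVMe base specification, while the struct and helper names are made up for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the upper halfword of completion dword 3: bit 0 is the phase
     * tag, bits 8:1 the status code (SC), bits 11:9 the status code type
     * (SCT), bit 14 "more" and bit 15 "do not retry". Names are illustrative. */
    struct nvme_status {
        unsigned sct;  /* status code type, 0x0 = generic */
        unsigned sc;   /* status code, 0x08 = aborted due to SQ deletion */
        unsigned more;
        unsigned dnr;
    };

    static struct nvme_status decode_status(uint16_t raw)
    {
        struct nvme_status s;
        s.sc   = (raw >> 1) & 0xff;
        s.sct  = (raw >> 9) & 0x7;
        s.more = (raw >> 14) & 0x1;
        s.dnr  = (raw >> 15) & 0x1;
        return s;
    }

    int main(void)
    {
        /* SCT=0x0, SC=0x08 reproduces the "(00/08) ... m:0 dnr:0" above. */
        uint16_t raw = (uint16_t)((0x0u << 9) | (0x08u << 1));
        struct nvme_status s = decode_status(raw);
        printf("(%02x/%02x) m:%u dnr:%u\n", s.sct, s.sc, s.more, s.dnr);
        return 0;
    }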
18:35:26.346911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.346920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.836 [2024-12-06 18:35:26.346927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.346934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a460 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.346928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.346953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.346959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.836 [2024-12-06 18:35:26.346962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.346973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.346976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.346981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.346985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.836 [2024-12-06 18:35:26.346989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.346993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.346998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891480 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.347001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.836 [2024-12-06 18:35:26.347009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.347017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.836 [2024-12-06 18:35:26.347025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.347032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108bc70 is same with the state(6) to be set 00:23:31.836 [2024-12-06 18:35:26.347127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.836 [2024-12-06 18:35:26.347138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.347155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.836 [2024-12-06 18:35:26.347163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.347173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.836 [2024-12-06 18:35:26.347180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.347190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.836 [2024-12-06 18:35:26.347197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.347207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.836 [2024-12-06 18:35:26.347214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.836 [2024-12-06 18:35:26.347224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 
[2024-12-06 18:35:26.347316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 
18:35:26.347490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347670] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.837 [2024-12-06 18:35:26.347893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.837 [2024-12-06 18:35:26.347902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.347910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.347919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.347927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.347937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.347944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.347953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.347961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.347970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.347978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.347988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.347995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.348985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.348993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.838 [2024-12-06 18:35:26.349299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.838 [2024-12-06 18:35:26.349307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.349574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.349587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.839 [2024-12-06 18:35:26.357580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.839 [2024-12-06 18:35:26.357588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.357792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.357802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b000 is same with the state(6) to be set 00:23:31.840 [2024-12-06 18:35:26.357915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc38bd0 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.357939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a7ba0 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.357980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.840 [2024-12-06 18:35:26.357991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.358000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.840 [2024-12-06 18:35:26.358009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.358018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.840 [2024-12-06 18:35:26.358026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.358034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:31.840 [2024-12-06 18:35:26.358042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.358049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1094fd0 is same with the state(6) to be set 00:23:31.840 [2024-12-06 18:35:26.358068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc39960 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.358086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb52610 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.358104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105e6c0 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.358123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3a8d0 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.358136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc36c90 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.358154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3a460 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.358172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108bc70 (9): Bad file descriptor 00:23:31.840 [2024-12-06 18:35:26.359746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.359983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.359992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.840 [2024-12-06 18:35:26.360002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.840 [2024-12-06 18:35:26.360010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.840 [2024-12-06 18:35:26.360023 - 18:35:26.360916] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:61-63 nsid:1 lba:32384-32640 and READ sqid:1 cid:0-46 nsid:1 lba:24576-30464 (50 commands, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.842 [2024-12-06 18:35:26.362240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:31.842 [2024-12-06 18:35:26.364385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:31.842 [2024-12-06 18:35:26.364897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.842 [2024-12-06 18:35:26.364939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3a8d0 with addr=10.0.0.2, port=4420
00:23:31.842 [2024-12-06 18:35:26.364952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a8d0 is same with the state(6) to be set
00:23:31.842 [2024-12-06 18:35:26.365633 - 18:35:26.365816] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-8 nsid:1 lba:24576-25600 (9 commands, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.842 [2024-12-06 18:35:26.365825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10416c0 is same with the state(6) to be set
00:23:31.842 [2024-12-06 18:35:26.366138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:31.842 [2024-12-06 18:35:26.366494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.842 [2024-12-06 18:35:26.366512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108bc70 with addr=10.0.0.2, port=4420
00:23:31.842 [2024-12-06 18:35:26.366526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108bc70 is same with the state(6) to be set
00:23:31.842 [2024-12-06 18:35:26.366538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3a8d0 (9): Bad file descriptor
00:23:31.842 [2024-12-06 18:35:26.366593 - 18:35:26.367016] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (5 occurrences)
00:23:31.842 [2024-12-06 18:35:26.367970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:31.842 [2024-12-06 18:35:26.367992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1094fd0 (9): Bad file descriptor
00:23:31.842 [2024-12-06 18:35:26.368312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:31.842 [2024-12-06 18:35:26.368327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb52610 with addr=10.0.0.2, port=4420
00:23:31.842 [2024-12-06 18:35:26.368336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb52610 is same with the state(6) to be set
00:23:31.842 [2024-12-06 18:35:26.368347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108bc70 (9): Bad file descriptor
00:23:31.842 [2024-12-06 18:35:26.368357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:31.842 [2024-12-06 18:35:26.368365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:31.842 [2024-12-06 18:35:26.368374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:31.842 [2024-12-06 18:35:26.368383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
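The repeated "connect() failed, errno = 111" errors above come from SPDK's POSIX socket layer while the host tries to re-establish the NVMe/TCP connection to 10.0.0.2 port 4420 during the controller resets. On Linux, errno 111 is ECONNREFUSED: the peer is reachable but actively refused the TCP connection, which is expected while the target listener is being torn down and brought back by the test. A minimal standalone C sketch (illustrative only, not SPDK code; the address and port are copied from the log) that produces the same errno when the address is reachable but nothing is listening:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Attempt the same TCP connect the log shows failing. If the host
     * is reachable but no listener is on port 4420, connect() fails
     * with errno 111 (ECONNREFUSED); an unreachable host would instead
     * time out or report EHOSTUNREACH. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Prints e.g.: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}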
00:23:31.842 [2024-12-06 18:35:26.368503 - 18:35:26.369924] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (64 commands, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.844 [2024-12-06 18:35:26.370819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103cbb0 is same with the state(6) to be set
00:23:31.844 [2024-12-06 18:35:26.370867] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:31.844 [2024-12-06 18:35:26.370967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb52610 (9): Bad file descriptor
00:23:31.844 [2024-12-06 18:35:26.370982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:31.844 [2024-12-06 18:35:26.370991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:31.844 [2024-12-06 18:35:26.371001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:31.844 [2024-12-06 18:35:26.371011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
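Every queued command in the runs above completes as "ABORTED - SQ DELETION (00/08)". The parenthesized pair is the completion's status code type and status code: SCT 0x0 is the NVMe generic command status set, and within it SC 0x08 is "Command Aborted due to SQ Deletion", the status reported for I/O still outstanding when its submission queue is destroyed mid-reset (dnr:0 means the Do Not Retry bit is clear, so the commands remain retryable). A small decoder sketch for that pair (illustrative, not SPDK's own status table; only a few generic codes shown):

#include <stdio.h>

/* Map a few NVMe generic-status (SCT 0x0) codes to strings of the kind
 * seen in the log. Sketch only; the real table has many more entries. */
static const char *generic_status_str(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x07: return "ABORTED - BY REQUEST";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "(other generic status)";
    }
}

int main(void)
{
    unsigned int sct = 0x0, sc = 0x08; /* the (00/08) pair from the log */

    if (sct == 0x0)
        printf("(%02x/%02x) -> %s\n", sct, sc, generic_status_str(sc));
    return 0;
}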
00:23:31.844 [2024-12-06 18:35:26.371060 - 18:35:26.372155] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-59 nsid:1 lba:16384-23936 (60 commands, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.846 [2024-12-06 18:35:26.372165] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.372173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.372182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.372190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.372199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.372207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.372217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.372225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.372235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3f830 is same with the state(6) to be set 00:23:31.846 [2024-12-06 18:35:26.373507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.373520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.373533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.373543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.373555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.373564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.373576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.373586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.373598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.373607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.846 [2024-12-06 18:35:26.373619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.846 [2024-12-06 18:35:26.373628] nvme_qpair.c: 474:spdk_nvme_print_completion: 
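For readers scanning this section: spdk_nvme_print_completion renders the NVMe status as a (SCT/SC) hex pair, so "(00/08)" above reads as Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, Command Aborted due to SQ Deletion, the expected completion for I/O still queued when the submission queue is torn down. A minimal sketch for tallying these completions offline, assuming the console output was saved as build.log (an illustrative filename, not produced by this job):

  # Count commands completed as ABORTED - SQ DELETION.
  # grep -o emits one line per match, so several entries fused onto a
  # single console line are still counted individually.
  grep -o 'ABORTED - SQ DELETION (00/08)' build.log | wc -l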
00:23:31.846 [2024-12-06 18:35:26.373507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.846 [2024-12-06 18:35:26.373520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 identical READ/completion pairs elided: cid:1 through cid:62, lba:24704 through lba:32512 in steps of 128, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:31.847 [2024-12-06 18:35:26.374681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.847 [2024-12-06 18:35:26.374688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.847 [2024-12-06 18:35:26.374697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe40900 is same with the state(6) to be set
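Each flush above walks the qpair's full command-identifier space, cid:0 through cid:63 (64 outstanding commands), and the LBA advances by exactly 128 blocks per command, matching len:128; in other words, the aborted workload was sequential I/O at queue depth 64. A small sketch, again against a hypothetical saved build.log, to confirm the stride:

  # Print the LBA delta between consecutive command prints; the dominant
  # value should be 128 (sequential I/O), with a few outliers where a new
  # flush sweep restarts.
  grep -o 'lba:[0-9]*' build.log | cut -d: -f2 \
    | awk 'NR > 1 { print $1 - prev } { prev = $1 }' \
    | sort -n | uniq -c | sort -rn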
00:23:31.847 [2024-12-06 18:35:26.375978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.847 [2024-12-06 18:35:26.375992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 identical command/completion pairs elided: WRITE cid:1 through cid:3 (lba:32896 through lba:33152) interleaved with READ cid:4 and cid:5 (lba:25088, lba:25216), then READ cid:6 through cid:62, lba:25344 through lba:32512 in steps of 128, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:31.849 [2024-12-06 18:35:26.377151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.849 [2024-12-06 18:35:26.377159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:31.849 [2024-12-06 18:35:26.377167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103ba20 is same with the state(6) to be set
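The *ERROR* lines from nvme_tcp_qpair_set_recv_state report that the receive state being set is the state the tqpair (0xe3f830, 0xe40900 and 0x103ba20 in turn) is already in, i.e. a redundant state transition; one follows each flush sweep in this excerpt. To pull just these transitions from the same hypothetical saved build.log:

  # List the receive-state transition errors with their line numbers.
  grep -n 'nvme_tcp_qpair_set_recv_state: \*ERROR\*' build.log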
00:23:31.849 [2024-12-06 18:35:26.378447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:31.849 [2024-12-06 18:35:26.378466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 36 identical READ/completion pairs elided: cid:1 through cid:36, lba:16512 through lba:20992 in steps of 128, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:31.850 [2024-12-06 18:35:26.379138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:31.850 [2024-12-06 18:35:26.379325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 
18:35:26.379499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.850 [2024-12-06 18:35:26.379529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.850 [2024-12-06 18:35:26.379537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.379547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.379554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.379564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.379572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.379582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.379590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.379599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.379607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.379615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103de50 is same with the state(6) to be set 00:23:31.851 [2024-12-06 18:35:26.380910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.380925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.380939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.380949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.380961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.380971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.380981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.380988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.380998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.851 [2024-12-06 18:35:26.381544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.851 [2024-12-06 18:35:26.381552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.381984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.381992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.382001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.382009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.382019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.382027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.382037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.382046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.382055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.382063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.382073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:31.852 [2024-12-06 18:35:26.382081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:31.852 [2024-12-06 18:35:26.382090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1040400 is same with the state(6) to be set 00:23:31.852 [2024-12-06 18:35:26.383874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:31.852 [2024-12-06 18:35:26.383904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:31.852 [2024-12-06 18:35:26.383916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:31.852 [2024-12-06 18:35:26.383927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:23:31.852 [2024-12-06 18:35:26.384309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.852 [2024-12-06 18:35:26.384326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1094fd0 with addr=10.0.0.2, port=4420 00:23:31.852 [2024-12-06 18:35:26.384334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1094fd0 is same with the state(6) to be set 00:23:31.852 [2024-12-06 18:35:26.384343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:31.852 [2024-12-06 18:35:26.384349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:31.852 [2024-12-06 18:35:26.384358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:31.852 [2024-12-06 18:35:26.384366] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:31.852 [2024-12-06 18:35:26.384413] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:23:31.852 [2024-12-06 18:35:26.384428] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
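Every completion above carries the status "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic) with status code 0x08, which is what the host reports for READs still outstanding on a submission queue the target deletes during shutdown. A minimal bash sketch (not SPDK's code) of how the "(SCT/SC) p:.. m:.. dnr:.." rendering falls out of the 16-bit completion status word; the raw value 0x0010 is an assumption reconstructed from the printed fields, not something the log dumps:

  # Unpack the NVMe CQE status halfword (field offsets per the NVMe spec).
  status=0x0010                       # assumed raw: sct=0x0, sc=0x08, p/m/dnr=0
  p=$((    status         & 0x1  ))   # phase tag, bit 0
  sc=$((  (status >> 1)   & 0xff ))   # status code, bits 8:1 (0x08 = SQ deletion)
  sct=$(( (status >> 9)   & 0x7  ))   # status code type, bits 11:9 (0x0 = generic)
  m=$((   (status >> 14)  & 0x1  ))   # more, bit 14
  dnr=$(( (status >> 15)  & 0x1  ))   # do not retry, bit 15
  printf '(%02x/%02x) p:%d m:%d dnr:%d\n' "$sct" "$sc" "$p" "$m" "$dnr"

Running it prints "(00/08) p:0 m:0 dnr:0", matching the completions logged above.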
00:23:31.853 [2024-12-06 18:35:26.384443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1094fd0 (9): Bad file descriptor
00:23:31.853 [2024-12-06 18:35:26.384776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:31.853 task offset: 30720 on job bdev=Nvme1n1 fails
00:23:31.853
00:23:31.853 Latency(us)
00:23:31.853 [2024-12-06T17:35:26.637Z] Device Information : runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average     min        max
(all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; each job ended in about its listed runtime with error)
Nvme1n1  :  0.96   199.64   12.48   66.55  0.00  237702.61  20534.61  223696.21
Nvme2n1  :  0.98   131.18    8.20   65.59  0.00  315246.65  16384.00  269134.51
Nvme3n1  :  0.98   196.28   12.27   65.43  0.00  232119.89  34734.08  221948.59
Nvme4n1  :  0.98   199.86   12.49   65.26  0.00  224391.27   5024.43  249910.61
Nvme5n1  :  0.97   197.30   12.33   65.77  0.00  221004.37  36918.61  206219.95
Nvme6n1  :  0.98   130.20    8.14   65.10  0.00  291867.88  16165.55  246415.36
Nvme7n1  :  0.97   198.75   12.42   66.25  0.00  209719.25  13926.40  244667.73
Nvme8n1  :  0.99   194.81   12.18   64.94  0.00  209733.76  20097.71  244667.73
Nvme9n1  :  0.97   197.88   12.37    9.28  0.00  255323.58  16711.68  256901.12
Nvme10n1 :  0.96   199.06   12.44   66.35  0.00  194810.03  15182.51  258648.75
===================================================================================================================
Total    :        1844.96  115.31  600.50  0.00  235334.22   5024.43  269134.51
00:23:31.853 [2024-12-06 18:35:26.410860] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:31.853 [2024-12-06 18:35:26.410918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:31.853 [2024-12-06 18:35:26.411260-412157] [... connect() failed, errno = 111 for tqpairs 0xc3a460, 0xc36c90, 0xc38bd0 and 0xc39960 (addr=10.0.0.2, port=4420); each followed by "sock connection error" and "The recv state ... is same with the state(6) to be set" ...]
00:23:31.853 [2024-12-06 18:35:26.413541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:31.853 [2024-12-06 18:35:26.413559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:31.853 [2024-12-06 18:35:26.413568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:31.853 [2024-12-06 18:35:26.413931-414149] [... connect() failed, errno = 111 for tqpairs 0x105e6c0 and 0x10a7ba0 (addr=10.0.0.2, port=4420), same sock-connection/recv-state error pair ...]
00:23:31.853 [2024-12-06 18:35:26.414161-414193] [... Failed to flush tqpairs 0xc3a460, 0xc36c90, 0xc38bd0 and 0xc39960 (9): Bad file descriptor ...]
00:23:31.853 [2024-12-06 18:35:26.414203-414228] [... for nqn.2016-06.io.spdk:cnode9: Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed ...]
00:23:31.853 [2024-12-06 18:35:26.414273-414312] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [cnode5, cnode4, cnode3 and cnode2 in turn] Unable to perform failover, already in progress.
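One quick consistency check on the bdevperf table above: at an IO size of 65536 bytes (64 KiB), the MiB/s column is just IOPS divided by 16, and the reported rows line up. A short sketch recomputing two of the rows:

  # MiB/s = IOPS * 65536 / 1048576 = IOPS / 16 for 64 KiB IOs.
  awk 'BEGIN {
      printf "Nvme1n1: %.2f MiB/s (table: 12.48)\n", 199.64 / 16
      printf "Total:   %.2f MiB/s (table: 115.31)\n", 1844.96 / 16
  }'

Both recomputed values (12.48 and 115.31) match the table, so the columns are internally consistent despite every job ending in error.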
00:23:31.853 [2024-12-06 18:35:26.414558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.853 [2024-12-06 18:35:26.414573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc3a8d0 with addr=10.0.0.2, port=4420 00:23:31.853 [2024-12-06 18:35:26.414582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a8d0 is same with the state(6) to be set 00:23:31.853 [2024-12-06 18:35:26.414752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.853 [2024-12-06 18:35:26.414765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x108bc70 with addr=10.0.0.2, port=4420 00:23:31.853 [2024-12-06 18:35:26.414773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x108bc70 is same with the state(6) to be set 00:23:31.853 [2024-12-06 18:35:26.415077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.853 [2024-12-06 18:35:26.415088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb52610 with addr=10.0.0.2, port=4420 00:23:31.853 [2024-12-06 18:35:26.415095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb52610 is same with the state(6) to be set 00:23:31.853 [2024-12-06 18:35:26.415105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105e6c0 (9): Bad file descriptor 00:23:31.853 [2024-12-06 18:35:26.415114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a7ba0 (9): Bad file descriptor 00:23:31.853 [2024-12-06 18:35:26.415123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:31.853 [2024-12-06 18:35:26.415129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:31.853 [2024-12-06 18:35:26.415137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:31.853 [2024-12-06 18:35:26.415144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:31.853 [2024-12-06 18:35:26.415151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:31.853 [2024-12-06 18:35:26.415158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:31.853 [2024-12-06 18:35:26.415165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:31.853 [2024-12-06 18:35:26.415172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:31.853 [2024-12-06 18:35:26.415181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:31.853 [2024-12-06 18:35:26.415187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:31.853 [2024-12-06 18:35:26.415195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:31.854 [2024-12-06 18:35:26.415201] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:23:31.854 [2024-12-06 18:35:26.415208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:31.854 [2024-12-06 18:35:26.415218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:31.854 [2024-12-06 18:35:26.415225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:31.854 [2024-12-06 18:35:26.415232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:31.854 [2024-12-06 18:35:26.415296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:31.854 [2024-12-06 18:35:26.415315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3a8d0 (9): Bad file descriptor 00:23:31.854 [2024-12-06 18:35:26.415326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x108bc70 (9): Bad file descriptor 00:23:31.854 [2024-12-06 18:35:26.415335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb52610 (9): Bad file descriptor 00:23:31.854 [2024-12-06 18:35:26.415344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:31.854 [2024-12-06 18:35:26.415350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:31.854 [2024-12-06 18:35:26.415358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:31.854 [2024-12-06 18:35:26.415365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:31.854 [2024-12-06 18:35:26.415372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:31.854 [2024-12-06 18:35:26.415379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:31.854 [2024-12-06 18:35:26.415386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:31.854 [2024-12-06 18:35:26.415393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:31.854 [2024-12-06 18:35:26.415713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.854 [2024-12-06 18:35:26.415726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1094fd0 with addr=10.0.0.2, port=4420 00:23:31.854 [2024-12-06 18:35:26.415734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1094fd0 is same with the state(6) to be set 00:23:31.854 [2024-12-06 18:35:26.415742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:31.854 [2024-12-06 18:35:26.415750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:31.854 [2024-12-06 18:35:26.415757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:23:31.854 [2024-12-06 18:35:26.415764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:31.854 [2024-12-06 18:35:26.415773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:31.854 [2024-12-06 18:35:26.415780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:31.854 [2024-12-06 18:35:26.415787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:31.854 [2024-12-06 18:35:26.415794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:31.854 [2024-12-06 18:35:26.415804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:31.854 [2024-12-06 18:35:26.415811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:31.854 [2024-12-06 18:35:26.415818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:31.854 [2024-12-06 18:35:26.415829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:31.854 [2024-12-06 18:35:26.415858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1094fd0 (9): Bad file descriptor 00:23:31.854 [2024-12-06 18:35:26.415888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:31.854 [2024-12-06 18:35:26.415896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:31.854 [2024-12-06 18:35:26.415904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:31.854 [2024-12-06 18:35:26.415910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
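The errno 111 failures above are ECONNREFUSED: shutdown_tc3 has already taken the target down while bdev_nvme is still driving ten controllers (cnode1 through cnode10), so every reconnect attempt hits a closed 10.0.0.2:4420 and each controller ends in "Resetting controller failed." A minimal probe sketch of the same condition, assuming bash's /dev/tcp support and coreutils timeout (illustration only, not the harness's code):

    # With nothing listening on 10.0.0.2:4420, connect() fails with
    # ECONNREFUSED (errno 111 on Linux), the same error posix_sock_create
    # reports in the entries above.
    if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "target is listening on 10.0.0.2:4420"
    else
        echo "connect() refused or timed out, matching the errno 111 entries"
    fi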
00:23:31.854 18:35:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2202906 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2202906 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2202906 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.240 rmmod nvme_tcp 00:23:33.240 
rmmod nvme_fabrics 00:23:33.240 rmmod nvme_keyring 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2202756 ']' 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2202756 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2202756 ']' 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2202756 00:23:33.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2202756) - No such process 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2202756 is not found' 00:23:33.240 Process with pid 2202756 is not found 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.240 18:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:35.155 00:23:35.155 real 0m7.049s 00:23:35.155 user 0m16.192s 00:23:35.155 sys 0m1.185s 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.155 ************************************ 00:23:35.155 END TEST nvmf_shutdown_tc3 00:23:35.155 ************************************ 00:23:35.155 18:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:35.155 ************************************ 00:23:35.155 START TEST nvmf_shutdown_tc4 00:23:35.155 ************************************ 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:35.155 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:35.156 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:35.156 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.156 18:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:35.156 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:35.156 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:35.156 18:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.156 18:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:23:35.419 00:23:35.419 --- 10.0.0.2 ping statistics --- 00:23:35.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.419 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:23:35.419 00:23:35.419 --- 10.0.0.1 ping statistics --- 00:23:35.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.419 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.419 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2204326 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2204326 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2204326 ']' 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
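The trace above prepares the test topology (one e810 port, cvl_0_0, moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, its peer cvl_0_1 left as 10.0.0.1, an iptables ACCEPT rule for port 4420, then a ping in each direction) before launching nvmf_tgt inside the namespace and waiting on its RPC socket. A minimal sketch of that start-and-poll pattern, simplified from what the harness's waitforlisten actually does (the quadruple ip netns exec nesting in the traced command is the harness's own construction):

    # Sketch only, not autotest_common.sh itself: start the target in the
    # server namespace, record its pid, and poll for the RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # The harness then drives the target over that socket, e.g. the
    # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 traced below.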
00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.681 18:35:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:35.681 [2024-12-06 18:35:30.287524] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:23:35.681 [2024-12-06 18:35:30.287595] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.681 [2024-12-06 18:35:30.380975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.681 [2024-12-06 18:35:30.412716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.681 [2024-12-06 18:35:30.412745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.681 [2024-12-06 18:35:30.412751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.681 [2024-12-06 18:35:30.412756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.681 [2024-12-06 18:35:30.412760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.681 [2024-12-06 18:35:30.414047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.681 [2024-12-06 18:35:30.414199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.681 [2024-12-06 18:35:30.414313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.681 [2024-12-06 18:35:30.414314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:36.622 [2024-12-06 18:35:31.128518] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:36.622 18:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.622 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.623 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:36.623 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:36.623 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.623 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:36.623 Malloc1 
00:23:36.623 [2024-12-06 18:35:31.235447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.623 Malloc2 00:23:36.623 Malloc3 00:23:36.623 Malloc4 00:23:36.623 Malloc5 00:23:36.623 Malloc6 00:23:36.883 Malloc7 00:23:36.883 Malloc8 00:23:36.883 Malloc9 00:23:36.883 Malloc10 00:23:36.883 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.883 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:36.883 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.883 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:36.883 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2204551 00:23:36.883 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:36.883 18:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:37.144 [2024-12-06 18:35:31.713355] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2204326 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2204326 ']' 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2204326 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2204326 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2204326' 00:23:42.433 killing process with pid 2204326 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2204326 00:23:42.433 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2204326 00:23:42.433 [2024-12-06 18:35:36.708254] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d070 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d070 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d070 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d070 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d070 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d070 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d540 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d540 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d540 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d540 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d540 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d540 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d540 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0d540 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the 
state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.708810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0da10 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.709026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0cba0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.709051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0cba0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.709057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0cba0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.709064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0cba0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.709069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0cba0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.709074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0cba0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.709573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0bd30 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.709590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0bd30 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.710120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0c6d0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.710134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0c6d0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.710140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0c6d0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.710145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0c6d0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.710150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0c6d0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.710155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0c6d0 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.710459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0b860 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.710477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0b860 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.711204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e880 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.711220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d0e880 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.711225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e880 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.711230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e880 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.711235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e880 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.711240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e880 is same with the state(6) to be set 00:23:42.433 [2024-12-06 18:35:36.711245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e880 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0e880 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.711611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.711635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 starting I/O failed: -6 00:23:42.434 [2024-12-06 18:35:36.711646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.711657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.711672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0ed50 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.711734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0dee0 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0dee0 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0dee0 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.711759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0dee0 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0dee0 is same with the state(6) to be set 00:23:42.434 [2024-12-06 18:35:36.711769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0dee0 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.711774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d0dee0 is same with the state(6) to be set 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 
Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 [2024-12-06 18:35:36.712318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf96a0 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.712332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf96a0 is same with the state(6) to be set 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 [2024-12-06 18:35:36.712653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:42.434 NVMe io qpair process completion error 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.434 starting I/O failed: -6 00:23:42.434 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 starting I/O failed: -6 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 starting I/O failed: -6 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 starting I/O failed: -6 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 starting I/O failed: -6 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 Write completed with error (sct=0, sc=8) 00:23:42.435 
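Each "Write completed with error (sct=0, sc=8)" line is spdk_nvme_perf reporting one aborted write: sct=0 selects the NVMe generic command status set, in which, per the base specification's status-code tables, 0x8 is Command Aborted due to SQ Deletion, the expected outcome when tc4 deletes subsystems under load. Likewise "starting I/O failed: -6" (ENXIO, "No such device or address") marks submissions that never reached a live qpair. To size the flood in a saved copy of this console output (the build.log path here is hypothetical):

    # Rough triage counts over a captured copy of this log.
    grep -c 'Write completed with error (sct=0, sc=8)' build.log
    grep -c 'starting I/O failed: -6' build.log
    grep -c 'CQ transport error -6' build.log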
[write-error/submit-failure lines omitted]
00:23:42.435 [2024-12-06 18:35:36.713846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[write-error/submit-failure lines omitted]
00:23:42.435 [2024-12-06 18:35:36.714635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[write-error/submit-failure lines omitted]
00:23:42.436 [2024-12-06 18:35:36.715537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[write-error/submit-failure lines omitted]
00:23:42.436 [2024-12-06 18:35:36.716942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:42.437 NVMe io qpair process completion error
[write-error/submit-failure lines omitted]
00:23:42.437 [2024-12-06 18:35:36.718071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[write-error/submit-failure lines omitted]
00:23:42.437 [2024-12-06 18:35:36.718878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[write-error/submit-failure lines omitted]
00:23:42.438 [2024-12-06 18:35:36.719804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[write-error/submit-failure lines omitted]
00:23:42.439 [2024-12-06 18:35:36.721253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:42.439 NVMe io qpair process completion error
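Each burst above ends the same way: the SPDK library logs "CQ transport error -6 (No such device or address)" from nvme_qpair.c while the application's completion poll returns a negative value, at which point the harness prints "NVMe io qpair process completion error". A sketch of such a poll loop, under the same caveats as the sketch above (illustrative, hypothetical names, public SPDK API only):

/* Illustrative poll loop; in this log, rc goes negative once the TCP
 * connection to the target subsystem is torn down. */
#include <stdio.h>
#include "spdk/nvme.h"

static bool
reap_completions(struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "no limit": reap everything available. */
	int rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* -ENXIO (-6): the transport reports the qpair/controller is
		 * gone; the library logs the "CQ transport error" line seen
		 * above before this call returns. */
		printf("NVMe io qpair process completion error\n");
		return false;	/* stop polling; disconnect or reconnect */
	}
	return true;
}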
[write-error/submit-failure lines omitted]
00:23:42.439 [2024-12-06 18:35:36.722245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[write-error/submit-failure lines omitted]
00:23:42.440 [2024-12-06 18:35:36.723051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[write-error/submit-failure lines omitted]
00:23:42.440 [2024-12-06 18:35:36.723983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[write-error/submit-failure lines omitted]
00:23:42.441 [2024-12-06 18:35:36.727089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:42.441 NVMe io qpair process completion error
[write-error/submit-failure lines omitted]
00:23:42.441 [2024-12-06 18:35:36.728257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[write-error/submit-failure lines omitted]
00:23:42.442 [2024-12-06 18:35:36.729191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[write-error/submit-failure lines omitted]
00:23:42.442 [2024-12-06 18:35:36.730166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[write-error/submit-failure lines omitted]
00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 [2024-12-06 18:35:36.731821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:42.443 NVMe io qpair process completion error 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 starting I/O failed: -6 00:23:42.443 [2024-12-06 18:35:36.733139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 Write completed with error (sct=0, sc=8) 00:23:42.443 
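For readers decoding the statuses above: sct=0 is the NVMe generic status code type and sc=8 is "Command Aborted due to SQ Deletion", i.e. the queued writes were aborted because their submission queues were deleted while the controllers disconnected. A minimal illustrative decoder follows; print_status is a hypothetical helper, while struct spdk_nvme_cpl, spdk_nvme_cpl_is_error(), SPDK_NVME_SCT_GENERIC, and SPDK_NVME_SC_ABORTED_SQ_DELETION are public SPDK identifiers.

#include <stdio.h>

#include "spdk/nvme.h"

/* Hypothetical helper: prints a completion's status in the same format
 * as the messages above. sct=0 (SPDK_NVME_SCT_GENERIC) with sc=8
 * (SPDK_NVME_SC_ABORTED_SQ_DELETION) means the command was aborted
 * because its submission queue was deleted during disconnect. */
static void
print_status(const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}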
00:23:42.444 [2024-12-06 18:35:36.734091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.444 [2024-12-06 18:35:36.735019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.445 [2024-12-06 18:35:36.738084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:42.445 NVMe io qpair process completion error
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.446 [2024-12-06 18:35:36.739519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.446 [2024-12-06 18:35:36.740350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.447 [2024-12-06 18:35:36.741272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.447 [2024-12-06 18:35:36.742888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:42.447 NVMe io qpair process completion error
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.448 [2024-12-06 18:35:36.744090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.448 [2024-12-06 18:35:36.744924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.449 [2024-12-06 18:35:36.745853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.450 [2024-12-06 18:35:36.747287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:42.450 NVMe io qpair process completion error
[... repeated write-error / I/O-failed lines elided ...]
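The recurring "CQ transport error -6 (No such device or address)" is spdk_nvme_qpair_process_completions() reporting -ENXIO after the TCP connection under each qpair went away. A minimal sketch of a polling loop reacting to that return value follows; poll_qpair and its error handling are illustrative assumptions, and only the SPDK call named in the traces is real.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Illustrative poll loop: spdk_nvme_qpair_process_completions() returns
 * the number of completions drained, or a negative errno once the
 * qpair's transport has failed. The "-6" in the traces above is -ENXIO
 * ("No such device or address"); after it, the driver completes every
 * queued I/O with an aborted status. */
static bool
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc < 0) {
		/* Stop submitting on this qpair until the application's
		 * reconnect/failover logic has restored the connection. */
		fprintf(stderr, "CQ transport error %d on qpair\n", rc);
		return false;
	}
	return true;
}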
00:23:42.450 [2024-12-06 18:35:36.748385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.450 [2024-12-06 18:35:36.749323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.451 [2024-12-06 18:35:36.750241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error / I/O-failed lines elided ...]
00:23:42.452 [2024-12-06 18:35:36.753716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:42.452 NVMe io qpair process completion error
[... repeated write-error / I/O-failed lines elided ...] 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452
starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write 
completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.452 starting I/O failed: -6 00:23:42.452 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 
00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.453 starting I/O failed: -6 00:23:42.453 Write completed with error (sct=0, sc=8) 00:23:42.454 starting I/O failed: -6 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 starting I/O failed: -6 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 starting I/O failed: -6 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 starting I/O failed: -6 00:23:42.454 [2024-12-06 18:35:36.757740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.454 NVMe io qpair process completion error 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error 
(sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 [2024-12-06 18:35:36.758606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error 
(sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 Write completed with error (sct=0, sc=8) 00:23:42.454 [2024-12-06 18:35:36.759961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:42.454 NVMe io qpair process completion error 00:23:42.454 Initializing NVMe Controllers 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:42.454 Controller IO queue size 128, less than required. 
00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:42.454 Controller IO queue size 128, less than required. 00:23:42.454 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:42.455 Initialization complete. Launching workers. 
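The repeated "Controller IO queue size 128, less than required" warning above means the initiator asked for a deeper I/O queue than the 128 entries each target subsystem advertises, so the surplus requests sit queued inside the NVMe driver rather than on the wire. As a hedged illustration only (these are standard spdk_nvme_perf flags, but this exact invocation is not taken from the test scripts), the same workload could be re-issued with a queue depth at or below the advertised limit:

  # Hypothetical re-run with queue depth 64 (below the reported limit of 128),
  # 4 KiB sequential writes for 10 seconds, against one subsystem from this log.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'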
00:23:42.455 ========================================================
00:23:42.455                                                                            Latency(us)
00:23:42.455 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1926.09      82.76   66471.63     792.46  122435.42
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1907.46      81.96   67137.75     809.98  121826.17
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1926.52      82.78   66491.19     806.95  116464.36
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1905.51      81.88   67267.53     550.28  119578.40
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1891.65      81.28   67785.95     593.36  118231.26
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1949.92      83.79   65805.88     685.26  126367.09
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1924.57      82.70   66691.95     866.24  119320.81
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1944.72      83.56   66017.96     703.19  118088.96
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1917.43      82.39   66996.36     717.00  118395.39
00:23:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1911.58      82.14   67060.72     523.46  118094.21
00:23:42.455 ========================================================
00:23:42.455 Total                                                                    :   19205.44     825.23   66767.74     523.46  126367.09
00:23:42.455
00:23:42.455 [2024-12-06 18:35:36.765518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2181720 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180410 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180740 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fef0 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2180a70 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f890 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217fbc0 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2181ae0 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2181900 is same with the state(6) to be set
00:23:42.455 [2024-12-06 18:35:36.765795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217f560 is same with the state(6) to be set
00:23:42.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:42.455 18:35:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
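The "CQ transport error -6 (No such device or address)" records above are -ENXIO reported by spdk_nvme_qpair_process_completions() once the target side of each queue pair disappeared; that is the intended outcome of shutdown_tc4, which kills the target while spdk_nvme_perf still has writes in flight. The trace that follows asserts that perf exited nonzero and then tears the environment down. A condensed sketch of that epilogue, assuming simplified helpers (the real NOT, stoptarget, and nvmfcleanup in the SPDK common scripts do more bookkeeping):

  # "NOT cmd" succeeds only if cmd fails; shutdown_tc4 expects the perf
  # process to exit with an error after its target was killed mid-run.
  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded
      fi
      return 0        # command failed, as required
  }

  perf_pid=2204551                      # pid taken from this log, for illustration
  NOT wait "$perf_pid"                  # es=1 in the trace below
  rm -f ./local-job0-0-verify.state     # stoptarget: remove test artifacts
  set +e                                # module unload may need retries
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break  # loop body simplified vs. the trace
  done
  modprobe -v -r nvme-fabrics
  set -e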
00:23:43.394 18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2204551
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2204551
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2204551
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
18:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
nvmf/common.sh@129 -- # return 0 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2204326 ']' 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2204326 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2204326 ']' 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2204326 00:23:43.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2204326) - No such process 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2204326 is not found' 00:23:43.394 Process with pid 2204326 is not found 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.394 18:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.941 00:23:45.941 real 0m10.270s 00:23:45.941 user 0m27.911s 00:23:45.941 sys 0m4.041s 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:45.941 ************************************ 00:23:45.941 END TEST nvmf_shutdown_tc4 00:23:45.941 ************************************ 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:45.941 00:23:45.941 real 0m43.181s 00:23:45.941 user 1m44.490s 00:23:45.941 sys 0m13.920s 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:23:45.941 ************************************ 00:23:45.941 END TEST nvmf_shutdown 00:23:45.941 ************************************ 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:45.941 ************************************ 00:23:45.941 START TEST nvmf_nsid 00:23:45.941 ************************************ 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:45.941 * Looking for test storage... 00:23:45.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:45.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.941 --rc genhtml_branch_coverage=1 00:23:45.941 --rc genhtml_function_coverage=1 00:23:45.941 --rc genhtml_legend=1 00:23:45.941 --rc geninfo_all_blocks=1 00:23:45.941 --rc geninfo_unexecuted_blocks=1 00:23:45.941 00:23:45.941 ' 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:45.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.941 --rc genhtml_branch_coverage=1 00:23:45.941 --rc genhtml_function_coverage=1 00:23:45.941 --rc genhtml_legend=1 00:23:45.941 --rc geninfo_all_blocks=1 00:23:45.941 --rc geninfo_unexecuted_blocks=1 00:23:45.941 00:23:45.941 ' 00:23:45.941 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:45.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.942 --rc genhtml_branch_coverage=1 00:23:45.942 --rc genhtml_function_coverage=1 00:23:45.942 --rc genhtml_legend=1 00:23:45.942 --rc geninfo_all_blocks=1 00:23:45.942 --rc geninfo_unexecuted_blocks=1 00:23:45.942 00:23:45.942 ' 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:45.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.942 --rc genhtml_branch_coverage=1 00:23:45.942 --rc genhtml_function_coverage=1 00:23:45.942 --rc genhtml_legend=1 00:23:45.942 --rc geninfo_all_blocks=1 00:23:45.942 --rc geninfo_unexecuted_blocks=1 00:23:45.942 00:23:45.942 ' 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:45.942 18:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:54.101 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:54.102 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.102 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:54.103 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
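The trace above classifies the machine's NICs by PCI vendor/device ID (the two 0x8086:0x159b entries match the e810 class the script filters for, bound to the ice driver) and then resolves each PCI address to its kernel network interface through sysfs. A hedged sketch of that resolution step, with the PCI addresses taken from this log and the loop simplified from nvmf/common.sh:

  # For each NIC, list the interface names sysfs exposes under the PCI node.
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path prefix
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done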
00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.103 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:54.104 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:54.104 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:54.104 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:54.108 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.109 18:35:47 
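Each selected PCI function is then resolved to its kernel net device through sysfs, and the `[[ up == up ]]` tests above keep only interfaces whose operstate is up. A sketch of that walk, using the addresses from this run:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        for path in "${pci_net_devs[@]}"; do
            dev=${path##*/}                               # same prefix strip as @427 above
            [[ $(<"$path/operstate") == up ]] || continue
            echo "Found net devices under $pci: $dev"     # matches the log lines
        done
    done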
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.109 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:23:54.109 00:23:54.109 --- 10.0.0.2 ping statistics --- 00:23:54.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.110 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:23:54.110 00:23:54.110 --- 10.0.0.1 ping statistics --- 00:23:54.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.110 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.110 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2210036 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2210036 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2210036 ']' 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.111 18:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 [2024-12-06 18:35:48.041309] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
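nvmf_tcp_init, traced above, builds a self-contained two-port topology: one physical port is moved into a private network namespace and acts as the target, the sibling port stays in the host namespace as the initiator, and a tagged iptables rule opens the NVMe/TCP port before both directions are ping-tested. The same setup as a standalone sketch (run as root; names and addresses taken from the log, the iptables comment tag abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator stays in host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF                          # tag the cleanup greps for
    ping -c 1 10.0.0.2                                          # host ns -> namespaced port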
00:23:54.111 [2024-12-06 18:35:48.041374] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.111 [2024-12-06 18:35:48.142948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.111 [2024-12-06 18:35:48.193898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.111 [2024-12-06 18:35:48.193949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.111 [2024-12-06 18:35:48.193958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.111 [2024-12-06 18:35:48.193965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.111 [2024-12-06 18:35:48.193971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.111 [2024-12-06 18:35:48.194720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.111 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.111 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:54.112 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.112 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.112 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2210133 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
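The nsid test then runs two SPDK targets side by side: nvmf_tgt inside the namespace on core 0 (mask 1) and a second spdk_tgt in the host namespace on core 1 (mask 2) with its own RPC socket. A sketch of the two launches, with paths and arguments as logged (the pid bookkeeping is an assumption):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!                      # pid 2210036 in this run
    ./build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
    tgt2pid=$!                      # pid 2210133; waitforlisten polls each socket before RPCs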
10.0.0.1 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c33ac626-64f7-4a91-9e91-2a2c4ecbb2e9 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=88f8547f-b48e-4820-8838-fd8ba54acadb 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9b56eff8-75b4-4b6b-aa81-84f19745beb3 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.376 18:35:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:54.376 null0 00:23:54.376 null1 00:23:54.376 null2 00:23:54.376 [2024-12-06 18:35:48.978267] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.376 [2024-12-06 18:35:48.978279] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:23:54.376 [2024-12-06 18:35:48.978342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210133 ] 00:23:54.376 [2024-12-06 18:35:49.002564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.376 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.376 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2210133 /var/tmp/tgt2.sock 00:23:54.376 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2210133 ']' 00:23:54.376 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:54.376 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.376 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:54.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
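Three UUIDs are generated and, per the null0/null1/null2 lines above, the second target is loaded with three null bdevs exposed as namespaces of nqn.2024-10.io.spdk:cnode2, listening on 10.0.0.1:4421. The rpc_cmd bodies are collapsed in the trace, so the following is a plausible reconstruction using standard rpc.py verbs; the bdev sizes and exact flag spellings are assumptions, not read from the log:

    rpc="./scripts/rpc.py -s /var/tmp/tgt2.sock"
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    for i in 0 1 2; do
        $rpc bdev_null_create null$i 100 4096           # 100 MiB, 4 KiB blocks (assumed)
    done
    $rpc nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 \
        --uuid c33ac626-64f7-4a91-9e91-2a2c4ecbb2e9     # ns1uuid from the log
    $rpc nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421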
00:23:54.376 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.376 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:54.376 [2024-12-06 18:35:49.070734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.376 [2024-12-06 18:35:49.123679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.637 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.637 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:23:54.637 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:55.209 [2024-12-06 18:35:49.686794] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.209 [2024-12-06 18:35:49.702973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:55.209 nvme0n1 nvme0n2 00:23:55.209 nvme1n1 00:23:55.209 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:55.209 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:55.209 18:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.596 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:23:56.597 18:35:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:57.542 18:35:52 
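After nvme connect (issued with the host NQN/ID generated earlier in the run), waitforblk polls until the block device surfaces; the @1239 through @1250 lines above are one pass through that loop. Its shape, reconstructed from the trace:

    waitforblk() {
        local i=0
        while ! lsblk -l -o NAME | grep -q -w "$1"; do
            [ $((i++)) -lt 15 ] || return 1             # bounded retry, ~15 s as in @1241
            sleep 1
        done
        return 0
    }
    waitforblk nvme0n1                                  # then nvme0n2 and nvme0n3 below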
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c33ac626-64f7-4a91-9e91-2a2c4ecbb2e9 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c33ac62664f74a919e912a2c4ecbb2e9 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C33AC62664F74A919E912A2C4ECBB2E9 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C33AC62664F74A919E912A2C4ECBB2E9 == \C\3\3\A\C\6\2\6\6\4\F\7\4\A\9\1\9\E\9\1\2\A\2\C\4\E\C\B\B\2\E\9 ]] 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 88f8547f-b48e-4820-8838-fd8ba54acadb 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:57.542 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=88f8547fb48e48208838fd8ba54acadb 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 88F8547FB48E48208838FD8BA54ACADB 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 88F8547FB48E48208838FD8BA54ACADB == \8\8\F\8\5\4\7\F\B\4\8\E\4\8\2\0\8\8\3\8\F\D\8\B\A\5\4\A\C\A\D\B ]] 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:23:57.804 18:35:52 
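The core assertion of the nsid test is visible in the uuid2nguid/nvme_get_nguid pairs above: the NGUID the target reports for each namespace must equal its configured UUID with the dashes stripped, compared case-insensitively. As a standalone sketch using the ns1 values from this run:

    uuid=c33ac626-64f7-4a91-9e91-2a2c4ecbb2e9
    want=$(tr -d - <<< "$uuid")                          # the uuid2nguid step in the trace
    got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${got^^} == "${want^^}" ]] && echo "nguid matches uuid for nsid 1"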
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9b56eff8-75b4-4b6b-aa81-84f19745beb3 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9b56eff875b44b6baa8184f19745beb3 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9B56EFF875B44B6BAA8184F19745BEB3 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9B56EFF875B44B6BAA8184F19745BEB3 == \9\B\5\6\E\F\F\8\7\5\B\4\4\B\6\B\A\A\8\1\8\4\F\1\9\7\4\5\B\E\B\3 ]] 00:23:57.804 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2210133 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2210133 ']' 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2210133 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2210133 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2210133' 00:23:58.065 killing process with pid 2210133 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2210133 00:23:58.065 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2210133 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:58.327 rmmod nvme_tcp 00:23:58.327 rmmod nvme_fabrics 00:23:58.327 rmmod nvme_keyring 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2210036 ']' 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2210036 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2210036 ']' 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2210036 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.327 18:35:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2210036 00:23:58.327 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:58.327 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:58.327 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2210036' 00:23:58.327 killing process with pid 2210036 00:23:58.327 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2210036 00:23:58.327 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2210036 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.589 18:35:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.509 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:00.509 00:24:00.509 real 0m14.970s 00:24:00.509 user 
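cleanup and nvmftestfini, traced above, unwind everything in reverse: kill both targets, unload the initiator modules under a bounded retry, drop only the SPDK-tagged firewall rules, and remove the namespace. The essential steps as a sketch; the _remove_spdk_ns body never appears in the log, so its one-liner here is an assumption:

    kill $tgt2pid $nvmfpid
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break                 # retried because refs may linger
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore # the 'iptr' helper at @791
    ip netns delete cvl_0_0_ns_spdk                      # assumed _remove_spdk_ns body
    ip -4 addr flush cvl_0_1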
0m11.457s 00:24:00.509 sys 0m6.877s 00:24:00.509 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.509 18:35:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:00.509 ************************************ 00:24:00.509 END TEST nvmf_nsid 00:24:00.509 ************************************ 00:24:00.509 18:35:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:00.509 00:24:00.509 real 13m3.558s 00:24:00.509 user 27m16.825s 00:24:00.509 sys 3m57.411s 00:24:00.509 18:35:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.509 18:35:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:00.509 ************************************ 00:24:00.509 END TEST nvmf_target_extra 00:24:00.509 ************************************ 00:24:00.771 18:35:55 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:00.771 18:35:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:00.771 18:35:55 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.771 18:35:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:00.771 ************************************ 00:24:00.771 START TEST nvmf_host 00:24:00.771 ************************************ 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:00.771 * Looking for test storage... 00:24:00.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:00.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.771 --rc genhtml_branch_coverage=1 00:24:00.771 --rc genhtml_function_coverage=1 00:24:00.771 --rc genhtml_legend=1 00:24:00.771 --rc geninfo_all_blocks=1 00:24:00.771 --rc geninfo_unexecuted_blocks=1 00:24:00.771 00:24:00.771 ' 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:00.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.771 --rc genhtml_branch_coverage=1 00:24:00.771 --rc genhtml_function_coverage=1 00:24:00.771 --rc genhtml_legend=1 00:24:00.771 --rc geninfo_all_blocks=1 00:24:00.771 --rc geninfo_unexecuted_blocks=1 00:24:00.771 00:24:00.771 ' 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:00.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.771 --rc genhtml_branch_coverage=1 00:24:00.771 --rc genhtml_function_coverage=1 00:24:00.771 --rc genhtml_legend=1 00:24:00.771 --rc geninfo_all_blocks=1 00:24:00.771 --rc geninfo_unexecuted_blocks=1 00:24:00.771 00:24:00.771 ' 00:24:00.771 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:00.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.772 --rc genhtml_branch_coverage=1 00:24:00.772 --rc genhtml_function_coverage=1 00:24:00.772 --rc genhtml_legend=1 00:24:00.772 --rc geninfo_all_blocks=1 00:24:00.772 --rc geninfo_unexecuted_blocks=1 00:24:00.772 00:24:00.772 ' 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
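Before each sub-test, scripts/common.sh decides whether the installed lcov predates version 2; here it does (1.15), so the fallback --rc lcov_* options are exported. The cmp_versions walk traced above splits both version strings on '.', '-' and ':' and compares the fields numerically, padding the shorter list with zeros. A compact equivalent:

    version_lt() {
        local -a v1 v2; local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                         # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: export fallback LCOV_OPTS"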
00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.772 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.033 ************************************ 00:24:01.033 START TEST nvmf_multicontroller 00:24:01.033 ************************************ 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:01.033 * Looking for test storage... 
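The "line 33: [: : integer expression expected" message above is not a test failure: the @33 check hands test(1) an empty operand where an integer is required, because the corresponding environment variable is unset in this job. A one-line repro and the usual guard (the variable name below is generic, not the one common.sh actually tests):

    [ '' -eq 1 ]                     # -> [: : integer expression expected, exit status 2
    [ "${some_flag:-0}" -eq 1 ]      # defaulting the expansion avoids the error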
00:24:01.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.033 --rc genhtml_branch_coverage=1 00:24:01.033 --rc genhtml_function_coverage=1 00:24:01.033 --rc genhtml_legend=1 00:24:01.033 --rc geninfo_all_blocks=1 00:24:01.033 --rc geninfo_unexecuted_blocks=1 00:24:01.033 00:24:01.033 ' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.033 --rc genhtml_branch_coverage=1 00:24:01.033 --rc genhtml_function_coverage=1 00:24:01.033 --rc genhtml_legend=1 00:24:01.033 --rc geninfo_all_blocks=1 00:24:01.033 --rc geninfo_unexecuted_blocks=1 00:24:01.033 00:24:01.033 ' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.033 --rc genhtml_branch_coverage=1 00:24:01.033 --rc genhtml_function_coverage=1 00:24:01.033 --rc genhtml_legend=1 00:24:01.033 --rc geninfo_all_blocks=1 00:24:01.033 --rc geninfo_unexecuted_blocks=1 00:24:01.033 00:24:01.033 ' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:01.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.033 --rc genhtml_branch_coverage=1 00:24:01.033 --rc genhtml_function_coverage=1 00:24:01.033 --rc genhtml_legend=1 00:24:01.033 --rc geninfo_all_blocks=1 00:24:01.033 --rc geninfo_unexecuted_blocks=1 00:24:01.033 00:24:01.033 ' 00:24:01.033 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:01.034 18:35:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.034 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:01.295 18:35:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:01.295 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:01.296 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.296 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.296 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.296 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:01.296 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:01.296 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.296 18:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.473 
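The recurring eval '_remove_spdk_ns 15> /dev/null' above is how xtrace_disable_per_cmd silences tracing for a single command: the suite appears to point BASH_XTRACEFD at fd 15 (an assumption consistent with that redirect), so sending fd 15 to /dev/null for one command mutes only the trace produced while it runs. In isolation:

    exec 15>&2                       # fd 15 mirrors stderr by default
    BASH_XTRACEFD=15
    set -x
    quiet_step() { echo "work"; }    # hypothetical helper standing in for _remove_spdk_ns
    quiet_step 15> /dev/null         # trace from inside the function is muted; output prints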
18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:09.473 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:09.473 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.473 18:36:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.473 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:09.474 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:09.474 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:09.474 18:36:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:09.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:09.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms
00:24:09.474
00:24:09.474 --- 10.0.0.2 ping statistics ---
00:24:09.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:09.474 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:09.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:09.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms
00:24:09.474
00:24:09.474 --- 10.0.0.1 ping statistics ---
00:24:09.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:09.474 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2215336
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2215336
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2215336 ']'
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:09.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:09.474 18:36:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:09.474 [2024-12-06 18:36:03.375075] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:24:09.474 [2024-12-06 18:36:03.375138] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.474 [2024-12-06 18:36:03.476037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:09.474 [2024-12-06 18:36:03.528521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.474 [2024-12-06 18:36:03.528577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.474 [2024-12-06 18:36:03.528586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.474 [2024-12-06 18:36:03.528595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.474 [2024-12-06 18:36:03.528601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.475 [2024-12-06 18:36:03.530490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.475 [2024-12-06 18:36:03.530533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.475 [2024-12-06 18:36:03.530534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.475 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.475 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:09.475 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.475 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.475 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 [2024-12-06 18:36:04.260740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 Malloc0 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 [2024-12-06 18:36:04.338052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 [2024-12-06 18:36:04.349939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 Malloc1 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.766 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2215686 00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2215686 /var/tmp/bdevperf.sock 00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2215686 ']' 00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:09.767 18:36:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:10.850 NVMe0n1
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.850 1
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:10.850 request:
00:24:10.850 {
00:24:10.850 "name": "NVMe0",
00:24:10.850 "trtype": "tcp",
00:24:10.850 "traddr": "10.0.0.2",
00:24:10.850 "adrfam": "ipv4",
00:24:10.850 "trsvcid": "4420",
00:24:10.850 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:10.850 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:24:10.850 "hostaddr": "10.0.0.1",
00:24:10.850 "prchk_reftag": false,
00:24:10.850 "prchk_guard": false,
00:24:10.850 "hdgst": false,
00:24:10.850 "ddgst": false,
00:24:10.850 "allow_unrecognized_csi": false,
00:24:10.850 "method": "bdev_nvme_attach_controller",
00:24:10.850 "req_id": 1
00:24:10.850 }
00:24:10.850 Got JSON-RPC error response
00:24:10.850 response:
00:24:10.850 {
00:24:10.850 "code": -114,
00:24:10.850 "message": "A controller named NVMe0 already exists with the specified network path"
00:24:10.850 }
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:10.850 request:
00:24:10.850 {
00:24:10.850 "name": "NVMe0",
00:24:10.850 "trtype": "tcp",
00:24:10.850 "traddr": "10.0.0.2",
00:24:10.850 "adrfam": "ipv4",
00:24:10.850 "trsvcid": "4420",
00:24:10.850 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:24:10.850 "hostaddr": "10.0.0.1",
00:24:10.850 "prchk_reftag": false,
00:24:10.850 "prchk_guard": false,
00:24:10.850 "hdgst": false,
00:24:10.850 "ddgst": false,
00:24:10.850 "allow_unrecognized_csi": false,
00:24:10.850 "method": "bdev_nvme_attach_controller",
00:24:10.850 "req_id": 1
00:24:10.850 }
00:24:10.850 Got JSON-RPC error response
00:24:10.850 response:
00:24:10.850 {
00:24:10.850 "code": -114,
00:24:10.850 "message": "A controller named NVMe0 already exists with the specified network path"
00:24:10.850 }
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:10.850 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:10.851 request:
00:24:10.851 {
00:24:10.851 "name": "NVMe0",
00:24:10.851 "trtype": "tcp",
00:24:10.851 "traddr": "10.0.0.2",
00:24:10.851 "adrfam": "ipv4",
00:24:10.851 "trsvcid": "4420",
00:24:10.851 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:10.851 "hostaddr": "10.0.0.1",
00:24:10.851 "prchk_reftag": false,
00:24:10.851 "prchk_guard": false,
00:24:10.851 "hdgst": false,
00:24:10.851 "ddgst": false,
00:24:10.851 "multipath": "disable",
00:24:10.851 "allow_unrecognized_csi": false,
00:24:10.851 "method": "bdev_nvme_attach_controller",
00:24:10.851 "req_id": 1
00:24:10.851 }
00:24:10.851 Got JSON-RPC error response
00:24:10.851 response:
00:24:10.851 {
00:24:10.851 "code": -114,
00:24:10.851 "message": "A controller named NVMe0 already exists and multipath is disabled"
00:24:10.851 }
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:10.851 request:
00:24:10.851 {
00:24:10.851 "name": "NVMe0",
00:24:10.851 "trtype": "tcp",
00:24:10.851 "traddr": "10.0.0.2",
00:24:10.851 "adrfam": "ipv4",
00:24:10.851 "trsvcid": "4420",
00:24:10.851 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:10.851 "hostaddr": "10.0.0.1",
00:24:10.851 "prchk_reftag": false,
00:24:10.851 "prchk_guard": false,
00:24:10.851 "hdgst": false,
00:24:10.851 "ddgst": false,
00:24:10.851 "multipath": "failover",
00:24:10.851 "allow_unrecognized_csi": false,
00:24:10.851 "method": "bdev_nvme_attach_controller",
00:24:10.851 "req_id": 1
00:24:10.851 }
00:24:10.851 Got JSON-RPC error response
00:24:10.851 response:
00:24:10.851 {
00:24:10.851 "code": -114,
00:24:10.851 "message": "A controller named NVMe0 already exists with the specified network path"
00:24:10.851 }
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.851 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:11.112 NVMe0n1
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:11.112
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:24:11.112 18:36:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:12.494 {
00:24:12.494 "results": [
00:24:12.494 {
00:24:12.494 "job": "NVMe0n1",
00:24:12.494 "core_mask": "0x1",
00:24:12.494 "workload": "write",
00:24:12.494 "status": "finished",
00:24:12.494 "queue_depth": 128,
00:24:12.494 "io_size": 4096,
00:24:12.494 "runtime": 1.004286,
00:24:12.494 "iops": 26310.23433563746,
00:24:12.494 "mibps": 102.77435287358382,
00:24:12.494 "io_failed": 0,
00:24:12.494 "io_timeout": 0,
00:24:12.494 "avg_latency_us": 4857.495121674298,
00:24:12.494 "min_latency_us": 2375.68,
00:24:12.494 "max_latency_us": 11468.8
00:24:12.494 }
00:24:12.494 ],
00:24:12.494 "core_count": 1
00:24:12.494 }
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2215686
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2215686 ']'
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2215686
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:12.494 18:36:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2215686
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2215686'
00:24:12.494 killing process with pid 2215686
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2215686
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2215686
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u
00:24:12.494 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat
00:24:12.494 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:12.494 [2024-12-06 18:36:04.480033] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:24:12.494 [2024-12-06 18:36:04.480106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215686 ]
00:24:12.494 [2024-12-06 18:36:04.557813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:12.494 [2024-12-06 18:36:04.611700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:12.494 [2024-12-06 18:36:05.765410] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 1659a6cd-4487-43f5-84e8-e879536cb302 already exists
00:24:12.494 [2024-12-06 18:36:05.765459] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:1659a6cd-4487-43f5-84e8-e879536cb302 alias for bdev NVMe1n1
00:24:12.494 [2024-12-06 18:36:05.765469] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:24:12.494 Running I/O for 1 seconds...
00:24:12.494 26295.00 IOPS, 102.71 MiB/s
00:24:12.494
00:24:12.494 Latency(us)
00:24:12.494 [2024-12-06T17:36:07.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.494 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:24:12.494 NVMe0n1 : 1.00 26310.23 102.77 0.00 0.00 4857.50 2375.68 11468.80
00:24:12.494 [2024-12-06T17:36:07.278Z] ===================================================================================================================
00:24:12.494 [2024-12-06T17:36:07.278Z] Total : 26310.23 102.77 0.00 0.00 4857.50 2375.68 11468.80
00:24:12.494 Received shutdown signal, test time was about 1.000000 seconds
00:24:12.494
00:24:12.494 Latency(us)
00:24:12.494 [2024-12-06T17:36:07.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.495 [2024-12-06T17:36:07.279Z] ===================================================================================================================
00:24:12.495 [2024-12-06T17:36:07.279Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:12.495 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:12.495 rmmod nvme_tcp
00:24:12.495 rmmod nvme_fabrics
00:24:12.495 rmmod nvme_keyring
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2215336 ']' 00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2215336 00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2215336 ']' 00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2215336 00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:12.495 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2215336 00:24:12.754 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:12.754 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:12.754 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2215336' 00:24:12.754 killing process with pid 2215336 00:24:12.754 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2215336 00:24:12.754 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2215336 00:24:12.754 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.755 18:36:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:15.302 00:24:15.302 real 0m13.902s 00:24:15.302 user 0m16.868s 00:24:15.302 sys 0m6.490s 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.302 ************************************ 00:24:15.302 END TEST nvmf_multicontroller 00:24:15.302 ************************************ 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.302 ************************************ 00:24:15.302 START TEST nvmf_aer 00:24:15.302 ************************************ 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:15.302 * Looking for test storage... 00:24:15.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:15.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.302 --rc genhtml_branch_coverage=1 00:24:15.302 --rc genhtml_function_coverage=1 00:24:15.302 --rc genhtml_legend=1 00:24:15.302 --rc geninfo_all_blocks=1 00:24:15.302 --rc geninfo_unexecuted_blocks=1 00:24:15.302 00:24:15.302 ' 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:15.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.302 --rc genhtml_branch_coverage=1 00:24:15.302 --rc genhtml_function_coverage=1 00:24:15.302 --rc genhtml_legend=1 00:24:15.302 --rc geninfo_all_blocks=1 00:24:15.302 --rc geninfo_unexecuted_blocks=1 00:24:15.302 00:24:15.302 ' 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:15.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.302 --rc genhtml_branch_coverage=1 00:24:15.302 --rc genhtml_function_coverage=1 00:24:15.302 --rc genhtml_legend=1 00:24:15.302 --rc geninfo_all_blocks=1 00:24:15.302 --rc geninfo_unexecuted_blocks=1 00:24:15.302 00:24:15.302 ' 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:15.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.302 --rc genhtml_branch_coverage=1 00:24:15.302 --rc genhtml_function_coverage=1 00:24:15.302 --rc genhtml_legend=1 00:24:15.302 --rc geninfo_all_blocks=1 00:24:15.302 --rc geninfo_unexecuted_blocks=1 00:24:15.302 00:24:15.302 ' 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.302 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:15.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:15.303 18:36:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:23.438 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:23.438 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.438 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:23.439 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:23.439 18:36:16 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:23.439 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:23.439 18:36:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:23.439 
18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:23.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:23.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:24:23.439 00:24:23.439 --- 10.0.0.2 ping statistics --- 00:24:23.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.439 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:23.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:24:23.439 00:24:23.439 --- 10.0.0.1 ping statistics --- 00:24:23.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.439 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2220844 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2220844 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2220844 ']' 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.439 18:36:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.439 [2024-12-06 18:36:17.379689] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:24:23.439 [2024-12-06 18:36:17.379758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.439 [2024-12-06 18:36:17.478619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.439 [2024-12-06 18:36:17.532269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.439 [2024-12-06 18:36:17.532322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.439 [2024-12-06 18:36:17.532331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.439 [2024-12-06 18:36:17.532339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.439 [2024-12-06 18:36:17.532345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.439 [2024-12-06 18:36:17.534348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.439 [2024-12-06 18:36:17.534509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.439 [2024-12-06 18:36:17.534687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.439 [2024-12-06 18:36:17.534746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.439 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.439 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:23.439 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.439 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.439 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 [2024-12-06 18:36:18.264271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 Malloc0 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 [2024-12-06 18:36:18.341647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.699 [ 00:24:23.699 { 00:24:23.699 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:23.699 "subtype": "Discovery", 00:24:23.699 "listen_addresses": [], 00:24:23.699 "allow_any_host": true, 00:24:23.699 "hosts": [] 00:24:23.699 }, 00:24:23.699 { 00:24:23.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.699 "subtype": "NVMe", 00:24:23.699 "listen_addresses": [ 00:24:23.699 { 00:24:23.699 "trtype": "TCP", 00:24:23.699 "adrfam": "IPv4", 00:24:23.699 "traddr": "10.0.0.2", 00:24:23.699 "trsvcid": "4420" 00:24:23.699 } 00:24:23.699 ], 00:24:23.699 "allow_any_host": true, 00:24:23.699 "hosts": [], 00:24:23.699 "serial_number": "SPDK00000000000001", 00:24:23.699 "model_number": "SPDK bdev Controller", 00:24:23.699 "max_namespaces": 2, 00:24:23.699 "min_cntlid": 1, 00:24:23.699 "max_cntlid": 65519, 00:24:23.699 "namespaces": [ 00:24:23.699 { 00:24:23.699 "nsid": 1, 00:24:23.699 "bdev_name": "Malloc0", 00:24:23.699 "name": "Malloc0", 00:24:23.699 "nguid": "022B53ED57244E458D226F395331EC40", 00:24:23.699 "uuid": "022b53ed-5724-4e45-8d22-6f395331ec40" 00:24:23.699 } 00:24:23.699 ] 00:24:23.699 } 00:24:23.699 ] 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2221023 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:23.699 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.959 Malloc1 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.959 [ 00:24:23.959 { 00:24:23.959 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:23.959 "subtype": "Discovery", 00:24:23.959 "listen_addresses": [], 00:24:23.959 "allow_any_host": true, 00:24:23.959 "hosts": [] 00:24:23.959 }, 00:24:23.959 { 00:24:23.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.959 "subtype": "NVMe", 00:24:23.959 "listen_addresses": [ 00:24:23.959 { 00:24:23.959 "trtype": "TCP", 00:24:23.959 "adrfam": "IPv4", 00:24:23.959 "traddr": "10.0.0.2", 00:24:23.959 "trsvcid": "4420" 00:24:23.959 } 00:24:23.959 ], 00:24:23.959 "allow_any_host": true, 00:24:23.959 "hosts": [], 00:24:23.959 "serial_number": "SPDK00000000000001", 00:24:23.959 "model_number": "SPDK bdev Controller", 00:24:23.959 "max_namespaces": 2, 00:24:23.959 "min_cntlid": 1, 00:24:23.959 Asynchronous Event Request test 00:24:23.959 Attaching to 10.0.0.2 00:24:23.959 Attached to 10.0.0.2 00:24:23.959 Registering asynchronous event callbacks... 00:24:23.959 Starting namespace attribute notice tests for all controllers... 00:24:23.959 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:23.959 aer_cb - Changed Namespace 00:24:23.959 Cleaning up... 
00:24:23.959 "max_cntlid": 65519, 00:24:23.959 "namespaces": [ 00:24:23.959 { 00:24:23.959 "nsid": 1, 00:24:23.959 "bdev_name": "Malloc0", 00:24:23.959 "name": "Malloc0", 00:24:23.959 "nguid": "022B53ED57244E458D226F395331EC40", 00:24:23.959 "uuid": "022b53ed-5724-4e45-8d22-6f395331ec40" 00:24:23.959 }, 00:24:23.959 { 00:24:23.959 "nsid": 2, 00:24:23.959 "bdev_name": "Malloc1", 00:24:23.959 "name": "Malloc1", 00:24:23.959 "nguid": "7D9FB0DB92674FC49A922EE67377A665", 00:24:23.959 "uuid": "7d9fb0db-9267-4fc4-9a92-2ee67377a665" 00:24:23.959 } 00:24:23.959 ] 00:24:23.959 } 00:24:23.959 ] 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2221023 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.959 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.959 rmmod nvme_tcp 00:24:24.219 rmmod nvme_fabrics 00:24:24.219 rmmod nvme_keyring 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2220844 ']' 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2220844 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2220844 ']' 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2220844 00:24:24.219 18:36:18 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2220844 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2220844' 00:24:24.219 killing process with pid 2220844 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2220844 00:24:24.219 18:36:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2220844 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.479 18:36:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.389 18:36:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:26.389 00:24:26.389 real 0m11.529s 00:24:26.389 user 0m8.215s 00:24:26.389 sys 0m6.195s 00:24:26.389 18:36:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:26.389 18:36:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:26.389 ************************************ 00:24:26.389 END TEST nvmf_aer 00:24:26.389 ************************************ 00:24:26.389 18:36:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:26.389 18:36:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:26.389 18:36:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:26.389 18:36:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.651 ************************************ 00:24:26.651 START TEST nvmf_async_init 00:24:26.651 ************************************ 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:26.651 * Looking for test storage... 
00:24:26.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.651 --rc genhtml_branch_coverage=1 00:24:26.651 --rc genhtml_function_coverage=1 00:24:26.651 --rc genhtml_legend=1 00:24:26.651 --rc geninfo_all_blocks=1 00:24:26.651 --rc geninfo_unexecuted_blocks=1 00:24:26.651 00:24:26.651 ' 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.651 --rc genhtml_branch_coverage=1 00:24:26.651 --rc genhtml_function_coverage=1 00:24:26.651 --rc genhtml_legend=1 00:24:26.651 --rc geninfo_all_blocks=1 00:24:26.651 --rc geninfo_unexecuted_blocks=1 00:24:26.651 00:24:26.651 ' 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.651 --rc genhtml_branch_coverage=1 00:24:26.651 --rc genhtml_function_coverage=1 00:24:26.651 --rc genhtml_legend=1 00:24:26.651 --rc geninfo_all_blocks=1 00:24:26.651 --rc geninfo_unexecuted_blocks=1 00:24:26.651 00:24:26.651 ' 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:26.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.651 --rc genhtml_branch_coverage=1 00:24:26.651 --rc genhtml_function_coverage=1 00:24:26.651 --rc genhtml_legend=1 00:24:26.651 --rc geninfo_all_blocks=1 00:24:26.651 --rc geninfo_unexecuted_blocks=1 00:24:26.651 00:24:26.651 ' 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.651 18:36:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.651 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:26.913 18:36:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ef8fd572a0ea402ab092eeabcf0a1cd5 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.913 18:36:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:35.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:35.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:35.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:35.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.056 18:36:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:35.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:24:35.056 00:24:35.056 --- 10.0.0.2 ping statistics --- 00:24:35.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.056 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:35.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:24:35.056 00:24:35.056 --- 10.0.0.1 ping statistics --- 00:24:35.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.056 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2225229 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2225229 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2225229 ']' 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.056 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.057 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.057 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.057 18:36:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.057 [2024-12-06 18:36:29.034597] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
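The network plumbing above reduces to moving one port of the E810 pair into a private namespace, addressing both ends, verifying reachability, and launching the target inside that namespace. A minimal sketch of what nvmf_tcp_init and nvmfappstart did, assuming the cvl_0_0/cvl_0_1 interface names reported in the log:

    # isolate the target-side port in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator end (root namespace) and the target end (inside the namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic on the initiator port, then ping both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target app then runs inside the namespace, as shown in the log
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1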
00:24:35.057 [2024-12-06 18:36:29.034684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.057 [2024-12-06 18:36:29.140169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.057 [2024-12-06 18:36:29.191028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.057 [2024-12-06 18:36:29.191082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.057 [2024-12-06 18:36:29.191091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.057 [2024-12-06 18:36:29.191098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.057 [2024-12-06 18:36:29.191105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.057 [2024-12-06 18:36:29.191869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 [2024-12-06 18:36:29.903521] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 null0 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ef8fd572a0ea402ab092eeabcf0a1cd5 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 [2024-12-06 18:36:29.963852] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 18:36:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.581 nvme0n1 00:24:35.581 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.581 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:35.581 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.581 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.581 [ 00:24:35.581 { 00:24:35.581 "name": "nvme0n1", 00:24:35.581 "aliases": [ 00:24:35.581 "ef8fd572-a0ea-402a-b092-eeabcf0a1cd5" 00:24:35.581 ], 00:24:35.581 "product_name": "NVMe disk", 00:24:35.581 "block_size": 512, 00:24:35.581 "num_blocks": 2097152, 00:24:35.581 "uuid": "ef8fd572-a0ea-402a-b092-eeabcf0a1cd5", 00:24:35.581 "numa_id": 0, 00:24:35.581 "assigned_rate_limits": { 00:24:35.581 "rw_ios_per_sec": 0, 00:24:35.581 "rw_mbytes_per_sec": 0, 00:24:35.581 "r_mbytes_per_sec": 0, 00:24:35.581 "w_mbytes_per_sec": 0 00:24:35.581 }, 00:24:35.581 "claimed": false, 00:24:35.581 "zoned": false, 00:24:35.581 "supported_io_types": { 00:24:35.581 "read": true, 00:24:35.581 "write": true, 00:24:35.581 "unmap": false, 00:24:35.581 "flush": true, 00:24:35.581 "reset": true, 00:24:35.581 "nvme_admin": true, 00:24:35.581 "nvme_io": true, 00:24:35.581 "nvme_io_md": false, 00:24:35.581 "write_zeroes": true, 00:24:35.581 "zcopy": false, 00:24:35.581 "get_zone_info": false, 00:24:35.581 "zone_management": false, 00:24:35.581 "zone_append": false, 00:24:35.581 "compare": true, 00:24:35.581 "compare_and_write": true, 00:24:35.581 "abort": true, 00:24:35.581 "seek_hole": false, 00:24:35.581 "seek_data": false, 00:24:35.581 "copy": true, 00:24:35.582 "nvme_iov_md": false 00:24:35.582 }, 00:24:35.582 
"memory_domains": [ 00:24:35.582 { 00:24:35.582 "dma_device_id": "system", 00:24:35.582 "dma_device_type": 1 00:24:35.582 } 00:24:35.582 ], 00:24:35.582 "driver_specific": { 00:24:35.582 "nvme": [ 00:24:35.582 { 00:24:35.582 "trid": { 00:24:35.582 "trtype": "TCP", 00:24:35.582 "adrfam": "IPv4", 00:24:35.582 "traddr": "10.0.0.2", 00:24:35.582 "trsvcid": "4420", 00:24:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:35.582 }, 00:24:35.582 "ctrlr_data": { 00:24:35.582 "cntlid": 1, 00:24:35.582 "vendor_id": "0x8086", 00:24:35.582 "model_number": "SPDK bdev Controller", 00:24:35.582 "serial_number": "00000000000000000000", 00:24:35.582 "firmware_revision": "25.01", 00:24:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.582 "oacs": { 00:24:35.582 "security": 0, 00:24:35.582 "format": 0, 00:24:35.582 "firmware": 0, 00:24:35.582 "ns_manage": 0 00:24:35.582 }, 00:24:35.582 "multi_ctrlr": true, 00:24:35.582 "ana_reporting": false 00:24:35.582 }, 00:24:35.582 "vs": { 00:24:35.582 "nvme_version": "1.3" 00:24:35.582 }, 00:24:35.582 "ns_data": { 00:24:35.582 "id": 1, 00:24:35.582 "can_share": true 00:24:35.582 } 00:24:35.582 } 00:24:35.582 ], 00:24:35.582 "mp_policy": "active_passive" 00:24:35.582 } 00:24:35.582 } 00:24:35.582 ] 00:24:35.582 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.582 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:35.582 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.582 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.582 [2024-12-06 18:36:30.240490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:35.582 [2024-12-06 18:36:30.240580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2542880 (9): Bad file descriptor 00:24:35.843 [2024-12-06 18:36:30.372746] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:35.843 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.843 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:35.843 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.843 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.843 [ 00:24:35.843 { 00:24:35.843 "name": "nvme0n1", 00:24:35.843 "aliases": [ 00:24:35.843 "ef8fd572-a0ea-402a-b092-eeabcf0a1cd5" 00:24:35.843 ], 00:24:35.843 "product_name": "NVMe disk", 00:24:35.843 "block_size": 512, 00:24:35.843 "num_blocks": 2097152, 00:24:35.843 "uuid": "ef8fd572-a0ea-402a-b092-eeabcf0a1cd5", 00:24:35.843 "numa_id": 0, 00:24:35.843 "assigned_rate_limits": { 00:24:35.843 "rw_ios_per_sec": 0, 00:24:35.843 "rw_mbytes_per_sec": 0, 00:24:35.843 "r_mbytes_per_sec": 0, 00:24:35.843 "w_mbytes_per_sec": 0 00:24:35.843 }, 00:24:35.843 "claimed": false, 00:24:35.843 "zoned": false, 00:24:35.843 "supported_io_types": { 00:24:35.843 "read": true, 00:24:35.843 "write": true, 00:24:35.843 "unmap": false, 00:24:35.843 "flush": true, 00:24:35.843 "reset": true, 00:24:35.843 "nvme_admin": true, 00:24:35.843 "nvme_io": true, 00:24:35.843 "nvme_io_md": false, 00:24:35.843 "write_zeroes": true, 00:24:35.843 "zcopy": false, 00:24:35.843 "get_zone_info": false, 00:24:35.843 "zone_management": false, 00:24:35.843 "zone_append": false, 00:24:35.843 "compare": true, 00:24:35.843 "compare_and_write": true, 00:24:35.843 "abort": true, 00:24:35.843 "seek_hole": false, 00:24:35.843 "seek_data": false, 00:24:35.843 "copy": true, 00:24:35.843 "nvme_iov_md": false 00:24:35.843 }, 00:24:35.843 "memory_domains": [ 00:24:35.843 { 00:24:35.843 "dma_device_id": "system", 00:24:35.843 "dma_device_type": 1 00:24:35.843 } 00:24:35.843 ], 00:24:35.843 "driver_specific": { 00:24:35.843 "nvme": [ 00:24:35.843 { 00:24:35.843 "trid": { 00:24:35.843 "trtype": "TCP", 00:24:35.843 "adrfam": "IPv4", 00:24:35.843 "traddr": "10.0.0.2", 00:24:35.843 "trsvcid": "4420", 00:24:35.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:35.843 }, 00:24:35.843 "ctrlr_data": { 00:24:35.843 "cntlid": 2, 00:24:35.844 "vendor_id": "0x8086", 00:24:35.844 "model_number": "SPDK bdev Controller", 00:24:35.844 "serial_number": "00000000000000000000", 00:24:35.844 "firmware_revision": "25.01", 00:24:35.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.844 "oacs": { 00:24:35.844 "security": 0, 00:24:35.844 "format": 0, 00:24:35.844 "firmware": 0, 00:24:35.844 "ns_manage": 0 00:24:35.844 }, 00:24:35.844 "multi_ctrlr": true, 00:24:35.844 "ana_reporting": false 00:24:35.844 }, 00:24:35.844 "vs": { 00:24:35.844 "nvme_version": "1.3" 00:24:35.844 }, 00:24:35.844 "ns_data": { 00:24:35.844 "id": 1, 00:24:35.844 "can_share": true 00:24:35.844 } 00:24:35.844 } 00:24:35.844 ], 00:24:35.844 "mp_policy": "active_passive" 00:24:35.844 } 00:24:35.844 } 00:24:35.844 ] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
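The target stack this test drives can be rebuilt with the same RPCs the script issued; a sketch using scripts/rpc.py against the default /var/tmp/spdk.sock (the harness wraps these calls in its rpc_cmd helper), with the nguid and addresses as logged:

    # transport, backing null bdev, and subsystem
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ef8fd572a0ea402ab092eeabcf0a1cd5
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: attach, reset, inspect, detach (the cycle verified above)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_nvme_reset_controller nvme0
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0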
00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.yBhnJ5rvsj 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.yBhnJ5rvsj 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.yBhnJ5rvsj 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.844 [2024-12-06 18:36:30.461185] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.844 [2024-12-06 18:36:30.461345] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.844 [2024-12-06 18:36:30.485257] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:35.844 nvme0n1 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.844 [ 00:24:35.844 { 00:24:35.844 "name": "nvme0n1", 00:24:35.844 "aliases": [ 00:24:35.844 "ef8fd572-a0ea-402a-b092-eeabcf0a1cd5" 00:24:35.844 ], 00:24:35.844 "product_name": "NVMe disk", 00:24:35.844 "block_size": 512, 00:24:35.844 "num_blocks": 2097152, 00:24:35.844 "uuid": "ef8fd572-a0ea-402a-b092-eeabcf0a1cd5", 00:24:35.844 "numa_id": 0, 00:24:35.844 "assigned_rate_limits": { 00:24:35.844 "rw_ios_per_sec": 0, 00:24:35.844 "rw_mbytes_per_sec": 0, 00:24:35.844 "r_mbytes_per_sec": 0, 00:24:35.844 "w_mbytes_per_sec": 0 00:24:35.844 }, 00:24:35.844 "claimed": false, 00:24:35.844 "zoned": false, 00:24:35.844 "supported_io_types": { 00:24:35.844 "read": true, 00:24:35.844 "write": true, 00:24:35.844 "unmap": false, 00:24:35.844 "flush": true, 00:24:35.844 "reset": true, 00:24:35.844 "nvme_admin": true, 00:24:35.844 "nvme_io": true, 00:24:35.844 "nvme_io_md": false, 00:24:35.844 "write_zeroes": true, 00:24:35.844 "zcopy": false, 00:24:35.844 "get_zone_info": false, 00:24:35.844 "zone_management": false, 00:24:35.844 "zone_append": false, 00:24:35.844 "compare": true, 00:24:35.844 "compare_and_write": true, 00:24:35.844 "abort": true, 00:24:35.844 "seek_hole": false, 00:24:35.844 "seek_data": false, 00:24:35.844 "copy": true, 00:24:35.844 "nvme_iov_md": false 00:24:35.844 }, 00:24:35.844 "memory_domains": [ 00:24:35.844 { 00:24:35.844 "dma_device_id": "system", 00:24:35.844 "dma_device_type": 1 00:24:35.844 } 00:24:35.844 ], 00:24:35.844 "driver_specific": { 00:24:35.844 "nvme": [ 00:24:35.844 { 00:24:35.844 "trid": { 00:24:35.844 "trtype": "TCP", 00:24:35.844 "adrfam": "IPv4", 00:24:35.844 "traddr": "10.0.0.2", 00:24:35.844 "trsvcid": "4421", 00:24:35.844 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:35.844 }, 00:24:35.844 "ctrlr_data": { 00:24:35.844 "cntlid": 3, 00:24:35.844 "vendor_id": "0x8086", 00:24:35.844 "model_number": "SPDK bdev Controller", 00:24:35.844 "serial_number": "00000000000000000000", 00:24:35.844 "firmware_revision": "25.01", 00:24:35.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:35.844 "oacs": { 00:24:35.844 "security": 0, 00:24:35.844 "format": 0, 00:24:35.844 "firmware": 0, 00:24:35.844 "ns_manage": 0 00:24:35.844 }, 00:24:35.844 "multi_ctrlr": true, 00:24:35.844 "ana_reporting": false 00:24:35.844 }, 00:24:35.844 "vs": { 00:24:35.844 "nvme_version": "1.3" 00:24:35.844 }, 00:24:35.844 "ns_data": { 00:24:35.844 "id": 1, 00:24:35.844 "can_share": true 00:24:35.844 } 00:24:35.844 } 00:24:35.844 ], 00:24:35.844 "mp_policy": "active_passive" 00:24:35.844 } 00:24:35.844 } 00:24:35.844 ] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.yBhnJ5rvsj 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
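The TLS leg repeats the attach over a PSK-secured second listener on port 4421; reduced to RPCs with the key material exactly as logged (the key path below is illustrative, since the script generated one with mktemp; note that both the tcp transport and bdev_nvme report TLS support as experimental):

    # register the PSK with the keyring, then gate the subsystem on it
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/psk.key
    chmod 0600 /tmp/psk.key
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator presents the same key when attaching
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0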
00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:35.844 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:35.844 rmmod nvme_tcp 00:24:36.106 rmmod nvme_fabrics 00:24:36.106 rmmod nvme_keyring 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2225229 ']' 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2225229 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2225229 ']' 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2225229 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225229 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225229' 00:24:36.106 killing process with pid 2225229 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2225229 00:24:36.106 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2225229 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.367 18:36:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.282 18:36:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:38.282 00:24:38.282 real 0m11.779s 00:24:38.282 user 0m4.251s 00:24:38.282 sys 0m6.120s 00:24:38.282 18:36:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.282 18:36:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.282 ************************************ 00:24:38.282 END TEST nvmf_async_init 00:24:38.282 ************************************ 00:24:38.282 18:36:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:38.282 18:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.282 18:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.282 18:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.543 ************************************ 00:24:38.543 START TEST dma 00:24:38.543 ************************************ 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:38.543 * Looking for test storage... 00:24:38.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:38.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.543 --rc genhtml_branch_coverage=1 00:24:38.543 --rc genhtml_function_coverage=1 00:24:38.543 --rc genhtml_legend=1 00:24:38.543 --rc geninfo_all_blocks=1 00:24:38.543 --rc geninfo_unexecuted_blocks=1 00:24:38.543 00:24:38.543 ' 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:38.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.543 --rc genhtml_branch_coverage=1 00:24:38.543 --rc genhtml_function_coverage=1 00:24:38.543 --rc genhtml_legend=1 00:24:38.543 --rc geninfo_all_blocks=1 00:24:38.543 --rc geninfo_unexecuted_blocks=1 00:24:38.543 00:24:38.543 ' 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:38.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.543 --rc genhtml_branch_coverage=1 00:24:38.543 --rc genhtml_function_coverage=1 00:24:38.543 --rc genhtml_legend=1 00:24:38.543 --rc geninfo_all_blocks=1 00:24:38.543 --rc geninfo_unexecuted_blocks=1 00:24:38.543 00:24:38.543 ' 00:24:38.543 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:38.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.544 --rc genhtml_branch_coverage=1 00:24:38.544 --rc genhtml_function_coverage=1 00:24:38.544 --rc genhtml_legend=1 00:24:38.544 --rc geninfo_all_blocks=1 00:24:38.544 --rc geninfo_unexecuted_blocks=1 00:24:38.544 00:24:38.544 ' 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.544 
18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:38.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:38.544 00:24:38.544 real 0m0.241s 00:24:38.544 user 0m0.135s 00:24:38.544 sys 0m0.118s 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.544 18:36:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:38.544 ************************************ 00:24:38.544 END TEST dma 00:24:38.544 ************************************ 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.806 ************************************ 00:24:38.806 START TEST nvmf_identify 00:24:38.806 
************************************ 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:38.806 * Looking for test storage... 00:24:38.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.806 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:39.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.068 --rc genhtml_branch_coverage=1 00:24:39.068 --rc genhtml_function_coverage=1 00:24:39.068 --rc genhtml_legend=1 00:24:39.068 --rc geninfo_all_blocks=1 00:24:39.068 --rc geninfo_unexecuted_blocks=1 00:24:39.068 00:24:39.068 ' 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:39.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.068 --rc genhtml_branch_coverage=1 00:24:39.068 --rc genhtml_function_coverage=1 00:24:39.068 --rc genhtml_legend=1 00:24:39.068 --rc geninfo_all_blocks=1 00:24:39.068 --rc geninfo_unexecuted_blocks=1 00:24:39.068 00:24:39.068 ' 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:39.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.068 --rc genhtml_branch_coverage=1 00:24:39.068 --rc genhtml_function_coverage=1 00:24:39.068 --rc genhtml_legend=1 00:24:39.068 --rc geninfo_all_blocks=1 00:24:39.068 --rc geninfo_unexecuted_blocks=1 00:24:39.068 00:24:39.068 ' 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:39.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.068 --rc genhtml_branch_coverage=1 00:24:39.068 --rc genhtml_function_coverage=1 00:24:39.068 --rc genhtml_legend=1 00:24:39.068 --rc geninfo_all_blocks=1 00:24:39.068 --rc geninfo_unexecuted_blocks=1 00:24:39.068 00:24:39.068 ' 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.068 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:39.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:39.069 18:36:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.221 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:47.222 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:47.222 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
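The xtrace above matches the two e810 ports by vendor:device ID (0x8086:0x159b) and starts resolving each PCI function to its kernel net device through the "/sys/bus/pci/devices/$pci/net/"* glob; the remaining loop iterations continue below. A minimal standalone sketch of the same lookup, assuming only the 0000:4b:00.x addresses seen in this run:

    #!/usr/bin/env bash
    # Mirrors pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) from nvmf/common.sh:
    # every netdev registered by the driver bound to a PCI function appears
    # under that function's net/ directory in sysfs.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] || continue   # glob unmatched: no netdev bound to this function
            echo "Found net device under $pci: ${netdir##*/}"
        done
    done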
00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:47.222 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:47.222 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:47.222 18:36:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:47.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:47.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:24:47.222 00:24:47.222 --- 10.0.0.2 ping statistics --- 00:24:47.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.222 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:47.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:47.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:24:47.222 00:24:47.222 --- 10.0.0.1 ping statistics --- 00:24:47.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:47.222 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:47.222 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2229942 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2229942 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2229942 ']' 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:47.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.223 18:36:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.223 [2024-12-06 18:36:41.252196] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:24:47.223 [2024-12-06 18:36:41.252260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.223 [2024-12-06 18:36:41.360343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:47.223 [2024-12-06 18:36:41.414710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.223 [2024-12-06 18:36:41.414772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.223 [2024-12-06 18:36:41.414781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.223 [2024-12-06 18:36:41.414788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.223 [2024-12-06 18:36:41.414795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.223 [2024-12-06 18:36:41.417205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.223 [2024-12-06 18:36:41.417366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.223 [2024-12-06 18:36:41.417528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.223 [2024-12-06 18:36:41.417528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.484 [2024-12-06 18:36:42.090894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.484 Malloc0 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.484 [2024-12-06 18:36:42.209456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.484 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.485 [ 00:24:47.485 { 00:24:47.485 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:47.485 "subtype": "Discovery", 00:24:47.485 "listen_addresses": [ 00:24:47.485 { 00:24:47.485 "trtype": "TCP", 00:24:47.485 "adrfam": "IPv4", 00:24:47.485 "traddr": "10.0.0.2", 00:24:47.485 "trsvcid": "4420" 00:24:47.485 } 00:24:47.485 ], 00:24:47.485 "allow_any_host": true, 00:24:47.485 "hosts": [] 00:24:47.485 }, 00:24:47.485 { 00:24:47.485 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.485 "subtype": "NVMe", 00:24:47.485 "listen_addresses": [ 00:24:47.485 { 00:24:47.485 "trtype": "TCP", 00:24:47.485 "adrfam": "IPv4", 00:24:47.485 "traddr": "10.0.0.2", 00:24:47.485 "trsvcid": "4420" 00:24:47.485 } 00:24:47.485 ], 00:24:47.485 "allow_any_host": true, 00:24:47.485 "hosts": [], 00:24:47.485 "serial_number": "SPDK00000000000001", 00:24:47.485 "model_number": "SPDK bdev Controller", 00:24:47.485 "max_namespaces": 32, 00:24:47.485 "min_cntlid": 1, 00:24:47.485 "max_cntlid": 65519, 00:24:47.485 "namespaces": [ 00:24:47.485 { 00:24:47.485 "nsid": 1, 00:24:47.485 "bdev_name": "Malloc0", 00:24:47.485 "name": "Malloc0", 00:24:47.485 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:47.485 "eui64": "ABCDEF0123456789", 00:24:47.485 "uuid": "29a83555-6a19-4f70-a5da-17d8ab7d13c1" 00:24:47.485 } 00:24:47.485 ] 00:24:47.485 } 00:24:47.485 ] 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.485 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:47.750 [2024-12-06 18:36:42.274890] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:24:47.750 [2024-12-06 18:36:42.274938] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230218 ] 00:24:47.750 [2024-12-06 18:36:42.330171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:47.750 [2024-12-06 18:36:42.330236] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:47.750 [2024-12-06 18:36:42.330242] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:47.750 [2024-12-06 18:36:42.330261] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:47.750 [2024-12-06 18:36:42.330271] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:47.750 [2024-12-06 18:36:42.334050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:47.750 [2024-12-06 18:36:42.334099] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf7b690 0 00:24:47.750 [2024-12-06 18:36:42.341658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:47.750 [2024-12-06 18:36:42.341675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:47.750 [2024-12-06 18:36:42.341681] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:47.750 [2024-12-06 18:36:42.341684] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:47.751 [2024-12-06 18:36:42.341729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.341735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.341740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.751 [2024-12-06 18:36:42.341761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:47.751 [2024-12-06 18:36:42.341783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.751 [2024-12-06 18:36:42.348657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.751 [2024-12-06 18:36:42.348670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.751 [2024-12-06 18:36:42.348674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.348679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.751 [2024-12-06 18:36:42.348694] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:47.751 [2024-12-06 18:36:42.348703] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:47.751 [2024-12-06 18:36:42.348708] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:47.751 [2024-12-06 18:36:42.348724] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.348728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.348732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.751 [2024-12-06 18:36:42.348740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.751 [2024-12-06 18:36:42.348758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.751 [2024-12-06 18:36:42.348971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.751 [2024-12-06 18:36:42.348978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.751 [2024-12-06 18:36:42.348981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.348985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.751 [2024-12-06 18:36:42.348991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:47.751 [2024-12-06 18:36:42.348999] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:47.751 [2024-12-06 18:36:42.349006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.751 [2024-12-06 18:36:42.349020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.751 [2024-12-06 18:36:42.349031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.751 [2024-12-06 18:36:42.349250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.751 [2024-12-06 18:36:42.349256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.751 [2024-12-06 18:36:42.349260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.751 [2024-12-06 18:36:42.349269] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:47.751 [2024-12-06 18:36:42.349278] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:47.751 [2024-12-06 18:36:42.349285] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349292] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.751 [2024-12-06 18:36:42.349303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.751 [2024-12-06 18:36:42.349314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 
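The DEBUG entries above trace the userspace initiator bringing up the admin queue on the discovery controller: FABRIC CONNECT returning CNTLID 0x0001, then PROPERTY GET reads of VS and CAP; the CC.EN check and the disable/enable handshake that waits for CSTS.RDY to flip follow below. The kernel initiator holds the same dialogue, so the listener configured above can also be queried with nvme-cli; a sketch, assuming it runs on the initiator side of the namespace split (where 10.0.0.2 is reachable):

    # Kernel-initiator equivalent of this spdk_nvme_identify discovery query.
    modprobe nvme-tcp
    nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn="$(nvme gen-hostnqn)"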
00:24:47.751 [2024-12-06 18:36:42.349515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.751 [2024-12-06 18:36:42.349521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.751 [2024-12-06 18:36:42.349525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.751 [2024-12-06 18:36:42.349534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:47.751 [2024-12-06 18:36:42.349543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.751 [2024-12-06 18:36:42.349557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.751 [2024-12-06 18:36:42.349567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.751 [2024-12-06 18:36:42.349762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.751 [2024-12-06 18:36:42.349769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.751 [2024-12-06 18:36:42.349773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.751 [2024-12-06 18:36:42.349781] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:47.751 [2024-12-06 18:36:42.349786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:47.751 [2024-12-06 18:36:42.349794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:47.751 [2024-12-06 18:36:42.349907] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:47.751 [2024-12-06 18:36:42.349912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:47.751 [2024-12-06 18:36:42.349921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.349928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.751 [2024-12-06 18:36:42.349935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.751 [2024-12-06 18:36:42.349946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.751 [2024-12-06 18:36:42.350142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.751 [2024-12-06 18:36:42.350151] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.751 [2024-12-06 18:36:42.350155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.350158] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.751 [2024-12-06 18:36:42.350163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:47.751 [2024-12-06 18:36:42.350173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.350177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.751 [2024-12-06 18:36:42.350184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.751 [2024-12-06 18:36:42.350191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.751 [2024-12-06 18:36:42.350201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.751 [2024-12-06 18:36:42.350385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.751 [2024-12-06 18:36:42.350392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.752 [2024-12-06 18:36:42.350395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.752 [2024-12-06 18:36:42.350403] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:47.752 [2024-12-06 18:36:42.350408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:47.752 [2024-12-06 18:36:42.350416] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:47.752 [2024-12-06 18:36:42.350424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:47.752 [2024-12-06 18:36:42.350435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.752 [2024-12-06 18:36:42.350445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.752 [2024-12-06 18:36:42.350456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.752 [2024-12-06 18:36:42.350703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.752 [2024-12-06 18:36:42.350710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.752 [2024-12-06 18:36:42.350713] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350718] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf7b690): datao=0, datal=4096, cccid=0 00:24:47.752 [2024-12-06 18:36:42.350723] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xfdd100) on tqpair(0xf7b690): expected_datao=0, payload_size=4096 00:24:47.752 [2024-12-06 18:36:42.350727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350748] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350753] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.752 [2024-12-06 18:36:42.350896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.752 [2024-12-06 18:36:42.350900] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.752 [2024-12-06 18:36:42.350913] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:47.752 [2024-12-06 18:36:42.350921] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:47.752 [2024-12-06 18:36:42.350926] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:47.752 [2024-12-06 18:36:42.350931] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:47.752 [2024-12-06 18:36:42.350936] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:47.752 [2024-12-06 18:36:42.350943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:47.752 [2024-12-06 18:36:42.350952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:47.752 [2024-12-06 18:36:42.350959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.350967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.752 [2024-12-06 18:36:42.350974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.752 [2024-12-06 18:36:42.350985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.752 [2024-12-06 18:36:42.351156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.752 [2024-12-06 18:36:42.351162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.752 [2024-12-06 18:36:42.351166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.752 [2024-12-06 18:36:42.351178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf7b690) 00:24:47.752 [2024-12-06 
18:36:42.351192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.752 [2024-12-06 18:36:42.351198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf7b690) 00:24:47.752 [2024-12-06 18:36:42.351211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.752 [2024-12-06 18:36:42.351217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf7b690) 00:24:47.752 [2024-12-06 18:36:42.351231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.752 [2024-12-06 18:36:42.351237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.752 [2024-12-06 18:36:42.351250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.752 [2024-12-06 18:36:42.351255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:47.752 [2024-12-06 18:36:42.351268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:47.752 [2024-12-06 18:36:42.351274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf7b690) 00:24:47.752 [2024-12-06 18:36:42.351285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.752 [2024-12-06 18:36:42.351297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd100, cid 0, qid 0 00:24:47.752 [2024-12-06 18:36:42.351305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd280, cid 1, qid 0 00:24:47.752 [2024-12-06 18:36:42.351310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd400, cid 2, qid 0 00:24:47.752 [2024-12-06 18:36:42.351315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.752 [2024-12-06 18:36:42.351319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd700, cid 4, qid 0 00:24:47.752 [2024-12-06 18:36:42.351552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.752 [2024-12-06 18:36:42.351558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.752 [2024-12-06 18:36:42.351562] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.752 
[2024-12-06 18:36:42.351566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd700) on tqpair=0xf7b690 00:24:47.752 [2024-12-06 18:36:42.351571] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:47.752 [2024-12-06 18:36:42.351577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:47.752 [2024-12-06 18:36:42.351587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.752 [2024-12-06 18:36:42.351591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf7b690) 00:24:47.753 [2024-12-06 18:36:42.351597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.753 [2024-12-06 18:36:42.351608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd700, cid 4, qid 0 00:24:47.753 [2024-12-06 18:36:42.351794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.753 [2024-12-06 18:36:42.351801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.753 [2024-12-06 18:36:42.351805] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.351808] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf7b690): datao=0, datal=4096, cccid=4 00:24:47.753 [2024-12-06 18:36:42.351813] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd700) on tqpair(0xf7b690): expected_datao=0, payload_size=4096 00:24:47.753 [2024-12-06 18:36:42.351817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.351824] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.351828] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.351994] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.753 [2024-12-06 18:36:42.352001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.753 [2024-12-06 18:36:42.352004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd700) on tqpair=0xf7b690 00:24:47.753 [2024-12-06 18:36:42.352020] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:47.753 [2024-12-06 18:36:42.352044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf7b690) 00:24:47.753 [2024-12-06 18:36:42.352055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.753 [2024-12-06 18:36:42.352062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf7b690) 00:24:47.753 [2024-12-06 18:36:42.352076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.753 [2024-12-06 18:36:42.352093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd700, cid 4, qid 0 00:24:47.753 [2024-12-06 18:36:42.352098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd880, cid 5, qid 0 00:24:47.753 [2024-12-06 18:36:42.352322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.753 [2024-12-06 18:36:42.352330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.753 [2024-12-06 18:36:42.352334] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352338] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf7b690): datao=0, datal=1024, cccid=4 00:24:47.753 [2024-12-06 18:36:42.352342] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd700) on tqpair(0xf7b690): expected_datao=0, payload_size=1024 00:24:47.753 [2024-12-06 18:36:42.352346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352353] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352357] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.753 [2024-12-06 18:36:42.352368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.753 [2024-12-06 18:36:42.352372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.352376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd880) on tqpair=0xf7b690 00:24:47.753 [2024-12-06 18:36:42.395652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.753 [2024-12-06 18:36:42.395669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.753 [2024-12-06 18:36:42.395673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.395678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd700) on tqpair=0xf7b690 00:24:47.753 [2024-12-06 18:36:42.395694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.395698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf7b690) 00:24:47.753 [2024-12-06 18:36:42.395706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.753 [2024-12-06 18:36:42.395726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd700, cid 4, qid 0 00:24:47.753 [2024-12-06 18:36:42.395917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.753 [2024-12-06 18:36:42.395923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.753 [2024-12-06 18:36:42.395927] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.395931] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf7b690): datao=0, datal=3072, cccid=4 00:24:47.753 [2024-12-06 18:36:42.395935] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd700) on tqpair(0xf7b690): expected_datao=0, payload_size=3072 00:24:47.753 [2024-12-06 18:36:42.395940] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
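The GET LOG PAGE commands in this stretch read discovery log page 0x70 in stages: a 1024-byte header fetch (cdw10:00ff0070), a 3072-byte re-read once the record count is known (header plus two 1024-byte entries, matching the Number of Records: 2 reported below), and a final 8-byte read of the generation counter to confirm the log did not change underneath the reader. A sketch of the same staged read with nvme-cli, assuming a hypothetical /dev/nvme0 connected to the discovery subsystem and a little-endian host (offsets follow the NVMe discovery log layout, not this log):

    # Staged discovery-log read (LID 0x70): header, then header+records, then GENCTR.
    dev=/dev/nvme0                                        # hypothetical controller node
    nvme get-log "$dev" --log-id=0x70 --log-len=1024 --raw-binary > hdr.bin
    numrec=$(od -An -tu8 -j8 -N8 hdr.bin | tr -d ' ')     # NUMREC lives at byte offset 8
    nvme get-log "$dev" --log-id=0x70 \
        --log-len=$(( 1024 + numrec * 1024 )) --raw-binary > full.bin
    od -An -tu8 -N8 full.bin                              # GENCTR: compare across reads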
00:24:47.753 [2024-12-06 18:36:42.395958] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.395963] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.396126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.753 [2024-12-06 18:36:42.396132] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.753 [2024-12-06 18:36:42.396135] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.396139] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd700) on tqpair=0xf7b690 00:24:47.753 [2024-12-06 18:36:42.396148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.396151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf7b690) 00:24:47.753 [2024-12-06 18:36:42.396158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.753 [2024-12-06 18:36:42.396179] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd700, cid 4, qid 0 00:24:47.753 [2024-12-06 18:36:42.396382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.753 [2024-12-06 18:36:42.396388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.753 [2024-12-06 18:36:42.396391] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.396395] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf7b690): datao=0, datal=8, cccid=4 00:24:47.753 [2024-12-06 18:36:42.396399] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfdd700) on tqpair(0xf7b690): expected_datao=0, payload_size=8 00:24:47.753 [2024-12-06 18:36:42.396404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.396410] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.396414] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.437823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.753 [2024-12-06 18:36:42.437836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.753 [2024-12-06 18:36:42.437840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.753 [2024-12-06 18:36:42.437844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd700) on tqpair=0xf7b690 00:24:47.753 ===================================================== 00:24:47.753 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:47.753 ===================================================== 00:24:47.753 Controller Capabilities/Features 00:24:47.753 ================================ 00:24:47.753 Vendor ID: 0000 00:24:47.753 Subsystem Vendor ID: 0000 00:24:47.753 Serial Number: .................... 00:24:47.753 Model Number: ........................................ 
00:24:47.753 Firmware Version: 25.01 00:24:47.753 Recommended Arb Burst: 0 00:24:47.753 IEEE OUI Identifier: 00 00 00 00:24:47.754 Multi-path I/O 00:24:47.754 May have multiple subsystem ports: No 00:24:47.754 May have multiple controllers: No 00:24:47.754 Associated with SR-IOV VF: No 00:24:47.754 Max Data Transfer Size: 131072 00:24:47.754 Max Number of Namespaces: 0 00:24:47.754 Max Number of I/O Queues: 1024 00:24:47.754 NVMe Specification Version (VS): 1.3 00:24:47.754 NVMe Specification Version (Identify): 1.3 00:24:47.754 Maximum Queue Entries: 128 00:24:47.754 Contiguous Queues Required: Yes 00:24:47.754 Arbitration Mechanisms Supported 00:24:47.754 Weighted Round Robin: Not Supported 00:24:47.754 Vendor Specific: Not Supported 00:24:47.754 Reset Timeout: 15000 ms 00:24:47.754 Doorbell Stride: 4 bytes 00:24:47.754 NVM Subsystem Reset: Not Supported 00:24:47.754 Command Sets Supported 00:24:47.754 NVM Command Set: Supported 00:24:47.754 Boot Partition: Not Supported 00:24:47.754 Memory Page Size Minimum: 4096 bytes 00:24:47.754 Memory Page Size Maximum: 4096 bytes 00:24:47.754 Persistent Memory Region: Not Supported 00:24:47.754 Optional Asynchronous Events Supported 00:24:47.754 Namespace Attribute Notices: Not Supported 00:24:47.754 Firmware Activation Notices: Not Supported 00:24:47.754 ANA Change Notices: Not Supported 00:24:47.754 PLE Aggregate Log Change Notices: Not Supported 00:24:47.754 LBA Status Info Alert Notices: Not Supported 00:24:47.754 EGE Aggregate Log Change Notices: Not Supported 00:24:47.754 Normal NVM Subsystem Shutdown event: Not Supported 00:24:47.754 Zone Descriptor Change Notices: Not Supported 00:24:47.754 Discovery Log Change Notices: Supported 00:24:47.754 Controller Attributes 00:24:47.754 128-bit Host Identifier: Not Supported 00:24:47.754 Non-Operational Permissive Mode: Not Supported 00:24:47.754 NVM Sets: Not Supported 00:24:47.754 Read Recovery Levels: Not Supported 00:24:47.754 Endurance Groups: Not Supported 00:24:47.754 Predictable Latency Mode: Not Supported 00:24:47.754 Traffic Based Keep ALive: Not Supported 00:24:47.754 Namespace Granularity: Not Supported 00:24:47.754 SQ Associations: Not Supported 00:24:47.754 UUID List: Not Supported 00:24:47.754 Multi-Domain Subsystem: Not Supported 00:24:47.754 Fixed Capacity Management: Not Supported 00:24:47.754 Variable Capacity Management: Not Supported 00:24:47.754 Delete Endurance Group: Not Supported 00:24:47.754 Delete NVM Set: Not Supported 00:24:47.754 Extended LBA Formats Supported: Not Supported 00:24:47.754 Flexible Data Placement Supported: Not Supported 00:24:47.754 00:24:47.754 Controller Memory Buffer Support 00:24:47.754 ================================ 00:24:47.754 Supported: No 00:24:47.754 00:24:47.754 Persistent Memory Region Support 00:24:47.754 ================================ 00:24:47.754 Supported: No 00:24:47.754 00:24:47.754 Admin Command Set Attributes 00:24:47.754 ============================ 00:24:47.754 Security Send/Receive: Not Supported 00:24:47.754 Format NVM: Not Supported 00:24:47.754 Firmware Activate/Download: Not Supported 00:24:47.754 Namespace Management: Not Supported 00:24:47.754 Device Self-Test: Not Supported 00:24:47.754 Directives: Not Supported 00:24:47.754 NVMe-MI: Not Supported 00:24:47.754 Virtualization Management: Not Supported 00:24:47.754 Doorbell Buffer Config: Not Supported 00:24:47.754 Get LBA Status Capability: Not Supported 00:24:47.754 Command & Feature Lockdown Capability: Not Supported 00:24:47.754 Abort Command Limit: 1 00:24:47.754 Async 
Event Request Limit: 4 00:24:47.754 Number of Firmware Slots: N/A 00:24:47.754 Firmware Slot 1 Read-Only: N/A 00:24:47.754 Firmware Activation Without Reset: N/A 00:24:47.754 Multiple Update Detection Support: N/A 00:24:47.754 Firmware Update Granularity: No Information Provided 00:24:47.754 Per-Namespace SMART Log: No 00:24:47.754 Asymmetric Namespace Access Log Page: Not Supported 00:24:47.754 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:47.754 Command Effects Log Page: Not Supported 00:24:47.754 Get Log Page Extended Data: Supported 00:24:47.754 Telemetry Log Pages: Not Supported 00:24:47.754 Persistent Event Log Pages: Not Supported 00:24:47.754 Supported Log Pages Log Page: May Support 00:24:47.754 Commands Supported & Effects Log Page: Not Supported 00:24:47.754 Feature Identifiers & Effects Log Page: May Support 00:24:47.754 NVMe-MI Commands & Effects Log Page: May Support 00:24:47.754 Data Area 4 for Telemetry Log: Not Supported 00:24:47.754 Error Log Page Entries Supported: 128 00:24:47.754 Keep Alive: Not Supported 00:24:47.754 00:24:47.754 NVM Command Set Attributes 00:24:47.754 ========================== 00:24:47.754 Submission Queue Entry Size 00:24:47.754 Max: 1 00:24:47.754 Min: 1 00:24:47.754 Completion Queue Entry Size 00:24:47.754 Max: 1 00:24:47.754 Min: 1 00:24:47.754 Number of Namespaces: 0 00:24:47.754 Compare Command: Not Supported 00:24:47.754 Write Uncorrectable Command: Not Supported 00:24:47.754 Dataset Management Command: Not Supported 00:24:47.754 Write Zeroes Command: Not Supported 00:24:47.754 Set Features Save Field: Not Supported 00:24:47.754 Reservations: Not Supported 00:24:47.754 Timestamp: Not Supported 00:24:47.754 Copy: Not Supported 00:24:47.754 Volatile Write Cache: Not Present 00:24:47.754 Atomic Write Unit (Normal): 1 00:24:47.754 Atomic Write Unit (PFail): 1 00:24:47.754 Atomic Compare & Write Unit: 1 00:24:47.754 Fused Compare & Write: Supported 00:24:47.754 Scatter-Gather List 00:24:47.754 SGL Command Set: Supported 00:24:47.754 SGL Keyed: Supported 00:24:47.754 SGL Bit Bucket Descriptor: Not Supported 00:24:47.754 SGL Metadata Pointer: Not Supported 00:24:47.754 Oversized SGL: Not Supported 00:24:47.754 SGL Metadata Address: Not Supported 00:24:47.754 SGL Offset: Supported 00:24:47.754 Transport SGL Data Block: Not Supported 00:24:47.754 Replay Protected Memory Block: Not Supported 00:24:47.754 00:24:47.754 Firmware Slot Information 00:24:47.754 ========================= 00:24:47.754 Active slot: 0 00:24:47.754 00:24:47.754 00:24:47.754 Error Log 00:24:47.754 ========= 00:24:47.754 00:24:47.754 Active Namespaces 00:24:47.754 ================= 00:24:47.754 Discovery Log Page 00:24:47.754 ================== 00:24:47.754 Generation Counter: 2 00:24:47.754 Number of Records: 2 00:24:47.754 Record Format: 0 00:24:47.754 00:24:47.754 Discovery Log Entry 0 00:24:47.754 ---------------------- 00:24:47.754 Transport Type: 3 (TCP) 00:24:47.754 Address Family: 1 (IPv4) 00:24:47.754 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:47.754 Entry Flags: 00:24:47.754 Duplicate Returned Information: 1 00:24:47.754 Explicit Persistent Connection Support for Discovery: 1 00:24:47.754 Transport Requirements: 00:24:47.754 Secure Channel: Not Required 00:24:47.754 Port ID: 0 (0x0000) 00:24:47.754 Controller ID: 65535 (0xffff) 00:24:47.754 Admin Max SQ Size: 128 00:24:47.754 Transport Service Identifier: 4420 00:24:47.754 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:47.755 Transport Address: 10.0.0.2 00:24:47.755 
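Discovery Log Entry 0 above describes the discovery subsystem itself (Subsystem Type 3); Entry 1, which follows, points at the actual NVM subsystem nqn.2016-06.io.spdk:cnode1 on the same 10.0.0.2:4420 portal. For reference, a minimal hedged C sketch of walking a discovery log page once it has been fetched into host memory; the struct and field names are taken from SPDK's public include/spdk/nvmf_spec.h and should be verified against the headers in use, and the Get Log Page transfer itself is omitted:

    #include <stdint.h>
    #include <stdio.h>
    #include "spdk/nvmf_spec.h"

    /* Print one line per discovery record. Subtype 3 marks the discovery
     * subsystem itself, subtype 2 a regular NVM subsystem. String fields
     * are space-padded per spec, not NUL-terminated, hence the %.*s. */
    static void print_discovery_log(const struct spdk_nvmf_discovery_log_page *log)
    {
        for (uint64_t i = 0; i < log->numrec; i++) {
            const struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

            printf("entry %ju: trtype=%u subtype=%u traddr=%.*s trsvcid=%.*s subnqn=%.*s\n",
                   (uintmax_t)i, e->trtype, e->subtype,
                   (int)sizeof(e->traddr), e->traddr,
                   (int)sizeof(e->trsvcid), e->trsvcid,
                   (int)sizeof(e->subnqn), e->subnqn);
        }
    }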
Discovery Log Entry 1 00:24:47.755 ---------------------- 00:24:47.755 Transport Type: 3 (TCP) 00:24:47.755 Address Family: 1 (IPv4) 00:24:47.755 Subsystem Type: 2 (NVM Subsystem) 00:24:47.755 Entry Flags: 00:24:47.755 Duplicate Returned Information: 0 00:24:47.755 Explicit Persistent Connection Support for Discovery: 0 00:24:47.755 Transport Requirements: 00:24:47.755 Secure Channel: Not Required 00:24:47.755 Port ID: 0 (0x0000) 00:24:47.755 Controller ID: 65535 (0xffff) 00:24:47.755 Admin Max SQ Size: 128 00:24:47.755 Transport Service Identifier: 4420 00:24:47.755 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:47.755 Transport Address: 10.0.0.2 [2024-12-06 18:36:42.437950] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:47.755 [2024-12-06 18:36:42.437962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd100) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.437969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.755 [2024-12-06 18:36:42.437974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd280) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.437979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.755 [2024-12-06 18:36:42.437984] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd400) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.437989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.755 [2024-12-06 18:36:42.437994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.437998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.755 [2024-12-06 18:36:42.438010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.755 [2024-12-06 18:36:42.438026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.755 [2024-12-06 18:36:42.438041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.755 [2024-12-06 18:36:42.438162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.755 [2024-12-06 18:36:42.438168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.755 [2024-12-06 18:36:42.438172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.438183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.755 [2024-12-06 18:36:42.438197] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.755 [2024-12-06 18:36:42.438213] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.755 [2024-12-06 18:36:42.438433] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.755 [2024-12-06 18:36:42.438440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.755 [2024-12-06 18:36:42.438443] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.438452] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:47.755 [2024-12-06 18:36:42.438457] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:47.755 [2024-12-06 18:36:42.438467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.755 [2024-12-06 18:36:42.438482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.755 [2024-12-06 18:36:42.438492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.755 [2024-12-06 18:36:42.438690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.755 [2024-12-06 18:36:42.438697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.755 [2024-12-06 18:36:42.438700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.438715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.755 [2024-12-06 18:36:42.438729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.755 [2024-12-06 18:36:42.438740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.755 [2024-12-06 18:36:42.438951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.755 [2024-12-06 18:36:42.438957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.755 [2024-12-06 18:36:42.438961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.438974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.438982] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.755 [2024-12-06 18:36:42.438988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.755 [2024-12-06 18:36:42.438999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.755 [2024-12-06 18:36:42.439192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.755 [2024-12-06 18:36:42.439198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.755 [2024-12-06 18:36:42.439202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.439206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.439217] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.439221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.439224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.755 [2024-12-06 18:36:42.439233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.755 [2024-12-06 18:36:42.439244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.755 [2024-12-06 18:36:42.439425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.755 [2024-12-06 18:36:42.439431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.755 [2024-12-06 18:36:42.439434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.439438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.755 [2024-12-06 18:36:42.439448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.755 [2024-12-06 18:36:42.439452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.439455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.756 [2024-12-06 18:36:42.439462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.756 [2024-12-06 18:36:42.439472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.756 [2024-12-06 18:36:42.439647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.756 [2024-12-06 18:36:42.439654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.756 [2024-12-06 18:36:42.439657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.439661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.756 [2024-12-06 18:36:42.439671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.439675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.439679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.756 [2024-12-06 18:36:42.439685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.756 [2024-12-06 18:36:42.439696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.756 [2024-12-06 18:36:42.439871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.756 [2024-12-06 18:36:42.439877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.756 [2024-12-06 18:36:42.439881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.439884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.756 [2024-12-06 18:36:42.439894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.439898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.439902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.756 [2024-12-06 18:36:42.439908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.756 [2024-12-06 18:36:42.439919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.756 [2024-12-06 18:36:42.440098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.756 [2024-12-06 18:36:42.440104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.756 [2024-12-06 18:36:42.440108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.756 [2024-12-06 18:36:42.440122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.756 [2024-12-06 18:36:42.440138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.756 [2024-12-06 18:36:42.440148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.756 [2024-12-06 18:36:42.440373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.756 [2024-12-06 18:36:42.440380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.756 [2024-12-06 18:36:42.440383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.756 [2024-12-06 18:36:42.440397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.756 [2024-12-06 18:36:42.440411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.756 [2024-12-06 18:36:42.440421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.756 [2024-12-06 18:36:42.440596] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.756 [2024-12-06 18:36:42.440603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.756 [2024-12-06 18:36:42.440606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.756 [2024-12-06 18:36:42.440620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.440627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf7b690) 00:24:47.756 [2024-12-06 18:36:42.440634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.756 [2024-12-06 18:36:42.444652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfdd580, cid 3, qid 0 00:24:47.756 [2024-12-06 18:36:42.444844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.756 [2024-12-06 18:36:42.444851] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.756 [2024-12-06 18:36:42.444854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.756 [2024-12-06 18:36:42.444858] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfdd580) on tqpair=0xf7b690 00:24:47.756 [2024-12-06 18:36:42.444866] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:24:47.756 00:24:47.756 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:47.756 [2024-12-06 18:36:42.490602] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:24:47.756 [2024-12-06 18:36:42.490662] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230294 ] 00:24:48.020 [2024-12-06 18:36:42.547261] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:48.020 [2024-12-06 18:36:42.547324] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:48.020 [2024-12-06 18:36:42.547330] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:48.020 [2024-12-06 18:36:42.547355] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:48.020 [2024-12-06 18:36:42.547366] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:48.020 [2024-12-06 18:36:42.548103] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:48.020 [2024-12-06 18:36:42.548144] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2426690 0 00:24:48.020 [2024-12-06 18:36:42.558656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:48.020 [2024-12-06 18:36:42.558671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:48.020 [2024-12-06 18:36:42.558676] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:48.020 [2024-12-06 18:36:42.558680] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:48.020 [2024-12-06 18:36:42.558719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.558725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.558729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.020 [2024-12-06 18:36:42.558742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:48.020 [2024-12-06 18:36:42.558764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.020 [2024-12-06 18:36:42.566649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.020 [2024-12-06 18:36:42.566659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.020 [2024-12-06 18:36:42.566663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.566668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.020 [2024-12-06 18:36:42.566678] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:48.020 [2024-12-06 18:36:42.566686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:48.020 [2024-12-06 18:36:42.566691] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:48.020 [2024-12-06 18:36:42.566706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.566710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.566714] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.020 [2024-12-06 18:36:42.566722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.020 [2024-12-06 18:36:42.566737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.020 [2024-12-06 18:36:42.566917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.020 [2024-12-06 18:36:42.566924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.020 [2024-12-06 18:36:42.566928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.566932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.020 [2024-12-06 18:36:42.566937] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:48.020 [2024-12-06 18:36:42.566946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:48.020 [2024-12-06 18:36:42.566953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.566957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.566960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.020 [2024-12-06 18:36:42.566967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.020 [2024-12-06 18:36:42.566983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.020 [2024-12-06 18:36:42.567166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.020 [2024-12-06 18:36:42.567173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.020 [2024-12-06 18:36:42.567177] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.567181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.020 [2024-12-06 18:36:42.567186] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:48.020 [2024-12-06 18:36:42.567195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:48.020 [2024-12-06 18:36:42.567201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.567205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.567209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.020 [2024-12-06 18:36:42.567215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.020 [2024-12-06 18:36:42.567226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.020 [2024-12-06 18:36:42.567410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.020 [2024-12-06 18:36:42.567417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.020 [2024-12-06 
18:36:42.567420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.567424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.020 [2024-12-06 18:36:42.567430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:48.020 [2024-12-06 18:36:42.567440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.567444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.567447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.020 [2024-12-06 18:36:42.567454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.020 [2024-12-06 18:36:42.567464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.020 [2024-12-06 18:36:42.567648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.020 [2024-12-06 18:36:42.567655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.020 [2024-12-06 18:36:42.567658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.020 [2024-12-06 18:36:42.567662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.020 [2024-12-06 18:36:42.567667] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:48.021 [2024-12-06 18:36:42.567672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:48.021 [2024-12-06 18:36:42.567681] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:48.021 [2024-12-06 18:36:42.567789] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:48.021 [2024-12-06 18:36:42.567794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:48.021 [2024-12-06 18:36:42.567803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.567806] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.567810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.021 [2024-12-06 18:36:42.567819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.021 [2024-12-06 18:36:42.567831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.021 [2024-12-06 18:36:42.568040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.021 [2024-12-06 18:36:42.568046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.021 [2024-12-06 18:36:42.568050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.021 
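The state ladder just logged (read VS/CAP, check CC.EN, disable and wait for CSTS.RDY = 0, then Setting CC.EN = 1) and the CSTS.RDY = 1 wait that follows are the standard NVMe controller-enable handshake, carried over Fabrics Property Get/Set commands instead of MMIO register writes. A hedged sketch of that handshake; prop_get/prop_set are hypothetical helpers standing in for the Fabrics Property commands, and the register offsets come from the NVMe base specification:

    #include <stdint.h>

    /* Hypothetical property accessors standing in for the NVMe-oF Fabrics
     * Property Get / Property Set commands seen in the log above. */
    uint32_t prop_get(uint32_t offset);
    void prop_set(uint32_t offset, uint32_t value);

    /* CC lives at offset 0x14 and CSTS at 0x1c (NVMe base spec);
     * CC.EN and CSTS.RDY are both bit 0. */
    void enable_controller(void)
    {
        uint32_t cc = prop_get(0x14);   /* FABRIC PROPERTY GET: read CC      */
        prop_set(0x14, cc | 1u);        /* FABRIC PROPERTY SET: set CC.EN    */
        while ((prop_get(0x1c) & 1u) == 0) {
            /* poll CSTS.RDY; the log gives this step a 15000 ms timeout */
        }
    }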
[2024-12-06 18:36:42.568058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:48.021 [2024-12-06 18:36:42.568069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.021 [2024-12-06 18:36:42.568083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.021 [2024-12-06 18:36:42.568093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.021 [2024-12-06 18:36:42.568273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.021 [2024-12-06 18:36:42.568279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.021 [2024-12-06 18:36:42.568283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.021 [2024-12-06 18:36:42.568291] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:48.021 [2024-12-06 18:36:42.568296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:48.021 [2024-12-06 18:36:42.568305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:48.021 [2024-12-06 18:36:42.568315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:48.021 [2024-12-06 18:36:42.568325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.021 [2024-12-06 18:36:42.568335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.021 [2024-12-06 18:36:42.568346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.021 [2024-12-06 18:36:42.568589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.021 [2024-12-06 18:36:42.568596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.021 [2024-12-06 18:36:42.568600] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568604] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2426690): datao=0, datal=4096, cccid=0 00:24:48.021 [2024-12-06 18:36:42.568609] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2488100) on tqpair(0x2426690): expected_datao=0, payload_size=4096 00:24:48.021 [2024-12-06 18:36:42.568613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568621] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568625] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.021 [2024-12-06 18:36:42.568787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.021 [2024-12-06 18:36:42.568791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568795] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.021 [2024-12-06 18:36:42.568803] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:48.021 [2024-12-06 18:36:42.568810] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:48.021 [2024-12-06 18:36:42.568815] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:48.021 [2024-12-06 18:36:42.568819] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:48.021 [2024-12-06 18:36:42.568824] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:48.021 [2024-12-06 18:36:42.568829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:48.021 [2024-12-06 18:36:42.568838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:48.021 [2024-12-06 18:36:42.568844] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.568852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.021 [2024-12-06 18:36:42.568859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:48.021 [2024-12-06 18:36:42.568871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.021 [2024-12-06 18:36:42.569069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.021 [2024-12-06 18:36:42.569075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.021 [2024-12-06 18:36:42.569079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.569083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.021 [2024-12-06 18:36:42.569090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.569093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.569097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2426690) 00:24:48.021 [2024-12-06 18:36:42.569103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.021 [2024-12-06 18:36:42.569109] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.569113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.021 [2024-12-06 
18:36:42.569117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2426690) 00:24:48.021 [2024-12-06 18:36:42.569123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.021 [2024-12-06 18:36:42.569129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.569132] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.569136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2426690) 00:24:48.021 [2024-12-06 18:36:42.569142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.021 [2024-12-06 18:36:42.569148] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.569152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.021 [2024-12-06 18:36:42.569155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2426690) 00:24:48.021 [2024-12-06 18:36:42.569163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.021 [2024-12-06 18:36:42.569168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:48.021 [2024-12-06 18:36:42.569179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:48.021 [2024-12-06 18:36:42.569186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.569189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2426690) 00:24:48.022 [2024-12-06 18:36:42.569196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.022 [2024-12-06 18:36:42.569208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488100, cid 0, qid 0 00:24:48.022 [2024-12-06 18:36:42.569214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488280, cid 1, qid 0 00:24:48.022 [2024-12-06 18:36:42.569219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488400, cid 2, qid 0 00:24:48.022 [2024-12-06 18:36:42.569223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488580, cid 3, qid 0 00:24:48.022 [2024-12-06 18:36:42.569228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488700, cid 4, qid 0 00:24:48.022 [2024-12-06 18:36:42.569480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.022 [2024-12-06 18:36:42.569487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.022 [2024-12-06 18:36:42.569490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.569494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488700) on tqpair=0x2426690 00:24:48.022 [2024-12-06 18:36:42.569499] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:48.022 [2024-12-06 18:36:42.569504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.569513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.569519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.569526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.569530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.569533] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2426690) 00:24:48.022 [2024-12-06 18:36:42.569540] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:48.022 [2024-12-06 18:36:42.569551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488700, cid 4, qid 0 00:24:48.022 [2024-12-06 18:36:42.569730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.022 [2024-12-06 18:36:42.569737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.022 [2024-12-06 18:36:42.569741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.569745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488700) on tqpair=0x2426690 00:24:48.022 [2024-12-06 18:36:42.569812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.569822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.569830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.569834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2426690) 00:24:48.022 [2024-12-06 18:36:42.569842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.022 [2024-12-06 18:36:42.569854] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488700, cid 4, qid 0 00:24:48.022 [2024-12-06 18:36:42.570064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.022 [2024-12-06 18:36:42.570071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.022 [2024-12-06 18:36:42.570074] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.570078] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2426690): datao=0, datal=4096, cccid=4 00:24:48.022 [2024-12-06 18:36:42.570083] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2488700) on tqpair(0x2426690): expected_datao=0, payload_size=4096 00:24:48.022 [2024-12-06 18:36:42.570087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.570094] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.570098] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 
18:36:42.570242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.022 [2024-12-06 18:36:42.570249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.022 [2024-12-06 18:36:42.570252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.570256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488700) on tqpair=0x2426690 00:24:48.022 [2024-12-06 18:36:42.570272] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:48.022 [2024-12-06 18:36:42.570283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.570293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.570300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.570303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2426690) 00:24:48.022 [2024-12-06 18:36:42.570310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.022 [2024-12-06 18:36:42.570321] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488700, cid 4, qid 0 00:24:48.022 [2024-12-06 18:36:42.570554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.022 [2024-12-06 18:36:42.570560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.022 [2024-12-06 18:36:42.570564] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.570567] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2426690): datao=0, datal=4096, cccid=4 00:24:48.022 [2024-12-06 18:36:42.570572] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2488700) on tqpair(0x2426690): expected_datao=0, payload_size=4096 00:24:48.022 [2024-12-06 18:36:42.570576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.570591] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.570595] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.574648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.022 [2024-12-06 18:36:42.574657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.022 [2024-12-06 18:36:42.574660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.574664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488700) on tqpair=0x2426690 00:24:48.022 [2024-12-06 18:36:42.574679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.574692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.574699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.574703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x2426690) 00:24:48.022 [2024-12-06 18:36:42.574710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.022 [2024-12-06 18:36:42.574722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488700, cid 4, qid 0 00:24:48.022 [2024-12-06 18:36:42.574910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.022 [2024-12-06 18:36:42.574917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.022 [2024-12-06 18:36:42.574920] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.574924] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2426690): datao=0, datal=4096, cccid=4 00:24:48.022 [2024-12-06 18:36:42.574928] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2488700) on tqpair(0x2426690): expected_datao=0, payload_size=4096 00:24:48.022 [2024-12-06 18:36:42.574932] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.574949] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.574954] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.575088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.022 [2024-12-06 18:36:42.575094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.022 [2024-12-06 18:36:42.575097] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.022 [2024-12-06 18:36:42.575101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488700) on tqpair=0x2426690 00:24:48.022 [2024-12-06 18:36:42.575109] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.575118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:48.022 [2024-12-06 18:36:42.575127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:48.023 [2024-12-06 18:36:42.575135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:48.023 [2024-12-06 18:36:42.575141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:48.023 [2024-12-06 18:36:42.575146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:48.023 [2024-12-06 18:36:42.575151] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:48.023 [2024-12-06 18:36:42.575156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:48.023 [2024-12-06 18:36:42.575162] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:48.023 [2024-12-06 18:36:42.575179] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 
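Everything from "setting state to connect adminq" through "setting state to ready" above is driven internally by a single spdk_nvme_connect() call; the identify example only supplies the transport ID string visible on its command line. A minimal hedged host-side sketch using SPDK's public API; environment-option fields have changed between SPDK releases, so treat the env setup as approximate:

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        env.opts_size = sizeof(env);      /* expected by recent SPDK releases */
        spdk_env_opts_init(&env);
        env.name = "identify_sketch";
        if (spdk_env_init(&env) < 0) {
            return 1;
        }

        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);  /* blocks until 'ready' */
        if (ctrlr == NULL) {
            return 1;
        }
        spdk_nvme_detach(ctrlr);  /* drives the shutdown sequence logged earlier */
        return 0;
    }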
[2024-12-06 18:36:42.575182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.575189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.023 [2024-12-06 18:36:42.575196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.575200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.575206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.575213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:48.023 [2024-12-06 18:36:42.575226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488700, cid 4, qid 0 00:24:48.023 [2024-12-06 18:36:42.575232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488880, cid 5, qid 0 00:24:48.023 [2024-12-06 18:36:42.575461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.023 [2024-12-06 18:36:42.575467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.023 [2024-12-06 18:36:42.575471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.575475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488700) on tqpair=0x2426690 00:24:48.023 [2024-12-06 18:36:42.575482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.023 [2024-12-06 18:36:42.575487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.023 [2024-12-06 18:36:42.575491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.575495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488880) on tqpair=0x2426690 00:24:48.023 [2024-12-06 18:36:42.575504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.575508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.575515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.023 [2024-12-06 18:36:42.575525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488880, cid 5, qid 0 00:24:48.023 [2024-12-06 18:36:42.575748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.023 [2024-12-06 18:36:42.575755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.023 [2024-12-06 18:36:42.575759] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.575763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488880) on tqpair=0x2426690 00:24:48.023 [2024-12-06 18:36:42.575773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.575776] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.575783] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.023 [2024-12-06 18:36:42.575793] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488880, cid 5, qid 0 00:24:48.023 [2024-12-06 18:36:42.575977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.023 [2024-12-06 18:36:42.575983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.023 [2024-12-06 18:36:42.575987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.575991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488880) on tqpair=0x2426690 00:24:48.023 [2024-12-06 18:36:42.576001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.576011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.023 [2024-12-06 18:36:42.576021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488880, cid 5, qid 0 00:24:48.023 [2024-12-06 18:36:42.576226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.023 [2024-12-06 18:36:42.576233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.023 [2024-12-06 18:36:42.576236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488880) on tqpair=0x2426690 00:24:48.023 [2024-12-06 18:36:42.576262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.576273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.023 [2024-12-06 18:36:42.576281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.576291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.023 [2024-12-06 18:36:42.576298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.576308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.023 [2024-12-06 18:36:42.576316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2426690) 00:24:48.023 [2024-12-06 18:36:42.576326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.023 [2024-12-06 18:36:42.576337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488880, cid 5, qid 0 00:24:48.023 
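The four GET LOG PAGE commands above (cdw10 low bytes 01, 02, 03, 05: error, SMART/health, firmware slot, and commands-supported-and-effects logs) are what the identify tool issues once the controller reaches ready. A hedged sketch of one such fetch through SPDK's public async admin API, polled to completion; names should be checked against the SPDK headers in use:

    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvme_spec.h"

    static bool g_done;

    static void log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        g_done = true;  /* real code would check spdk_nvme_cpl_is_error(cpl) */
    }

    /* Fetch the SMART / health log (log id 0x02) across all namespaces. */
    static int fetch_health_log(struct spdk_nvme_ctrlr *ctrlr,
                                struct spdk_nvme_health_information_page *page)
    {
        int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
                page, sizeof(*page), 0, log_page_cb, NULL);
        if (rc != 0) {
            return rc;
        }
        while (!g_done) {
            /* completions arrive on the admin qpair, as in the log above */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }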
[2024-12-06 18:36:42.576343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488700, cid 4, qid 0 00:24:48.023 [2024-12-06 18:36:42.576348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488a00, cid 6, qid 0 00:24:48.023 [2024-12-06 18:36:42.576352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488b80, cid 7, qid 0 00:24:48.023 [2024-12-06 18:36:42.576667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.023 [2024-12-06 18:36:42.576674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.023 [2024-12-06 18:36:42.576677] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576681] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2426690): datao=0, datal=8192, cccid=5 00:24:48.023 [2024-12-06 18:36:42.576685] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2488880) on tqpair(0x2426690): expected_datao=0, payload_size=8192 00:24:48.023 [2024-12-06 18:36:42.576690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576791] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576796] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.023 [2024-12-06 18:36:42.576807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.023 [2024-12-06 18:36:42.576811] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576814] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2426690): datao=0, datal=512, cccid=4 00:24:48.023 [2024-12-06 18:36:42.576819] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2488700) on tqpair(0x2426690): expected_datao=0, payload_size=512 00:24:48.023 [2024-12-06 18:36:42.576823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576829] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.023 [2024-12-06 18:36:42.576833] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.576839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.024 [2024-12-06 18:36:42.576844] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.024 [2024-12-06 18:36:42.576850] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.576854] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2426690): datao=0, datal=512, cccid=6 00:24:48.024 [2024-12-06 18:36:42.576858] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2488a00) on tqpair(0x2426690): expected_datao=0, payload_size=512 00:24:48.024 [2024-12-06 18:36:42.576862] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.576869] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.576872] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.576878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:48.024 [2024-12-06 18:36:42.576884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:48.024 [2024-12-06 18:36:42.576887] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.576891] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2426690): datao=0, datal=4096, cccid=7 00:24:48.024 [2024-12-06 18:36:42.576895] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2488b80) on tqpair(0x2426690): expected_datao=0, payload_size=4096 00:24:48.024 [2024-12-06 18:36:42.576900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.576916] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.576920] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.577077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.024 [2024-12-06 18:36:42.577084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.024 [2024-12-06 18:36:42.577087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.577091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488880) on tqpair=0x2426690 00:24:48.024 [2024-12-06 18:36:42.577103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.024 [2024-12-06 18:36:42.577109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.024 [2024-12-06 18:36:42.577113] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.577117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488700) on tqpair=0x2426690 00:24:48.024 [2024-12-06 18:36:42.577127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.024 [2024-12-06 18:36:42.577133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.024 [2024-12-06 18:36:42.577137] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.577141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488a00) on tqpair=0x2426690 00:24:48.024 [2024-12-06 18:36:42.577148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.024 [2024-12-06 18:36:42.577154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.024 [2024-12-06 18:36:42.577157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.024 [2024-12-06 18:36:42.577161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488b80) on tqpair=0x2426690 00:24:48.024 ===================================================== 00:24:48.024 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.024 ===================================================== 00:24:48.024 Controller Capabilities/Features 00:24:48.024 ================================ 00:24:48.024 Vendor ID: 8086 00:24:48.024 Subsystem Vendor ID: 8086 00:24:48.024 Serial Number: SPDK00000000000001 00:24:48.024 Model Number: SPDK bdev Controller 00:24:48.024 Firmware Version: 25.01 00:24:48.024 Recommended Arb Burst: 6 00:24:48.024 IEEE OUI Identifier: e4 d2 5c 00:24:48.024 Multi-path I/O 00:24:48.024 May have multiple subsystem ports: Yes 00:24:48.024 May have multiple controllers: Yes 00:24:48.024 Associated with SR-IOV VF: No 00:24:48.024 Max Data Transfer Size: 131072 00:24:48.024 Max Number of Namespaces: 32 00:24:48.024 Max Number of I/O Queues: 127 00:24:48.024 NVMe Specification Version (VS): 1.3 00:24:48.024 NVMe Specification Version (Identify): 1.3 
00:24:48.024 Maximum Queue Entries: 128 00:24:48.024 Contiguous Queues Required: Yes 00:24:48.024 Arbitration Mechanisms Supported 00:24:48.024 Weighted Round Robin: Not Supported 00:24:48.024 Vendor Specific: Not Supported 00:24:48.024 Reset Timeout: 15000 ms 00:24:48.024 Doorbell Stride: 4 bytes 00:24:48.024 NVM Subsystem Reset: Not Supported 00:24:48.024 Command Sets Supported 00:24:48.024 NVM Command Set: Supported 00:24:48.024 Boot Partition: Not Supported 00:24:48.024 Memory Page Size Minimum: 4096 bytes 00:24:48.024 Memory Page Size Maximum: 4096 bytes 00:24:48.024 Persistent Memory Region: Not Supported 00:24:48.024 Optional Asynchronous Events Supported 00:24:48.024 Namespace Attribute Notices: Supported 00:24:48.024 Firmware Activation Notices: Not Supported 00:24:48.024 ANA Change Notices: Not Supported 00:24:48.024 PLE Aggregate Log Change Notices: Not Supported 00:24:48.024 LBA Status Info Alert Notices: Not Supported 00:24:48.024 EGE Aggregate Log Change Notices: Not Supported 00:24:48.024 Normal NVM Subsystem Shutdown event: Not Supported 00:24:48.024 Zone Descriptor Change Notices: Not Supported 00:24:48.024 Discovery Log Change Notices: Not Supported 00:24:48.024 Controller Attributes 00:24:48.024 128-bit Host Identifier: Supported 00:24:48.024 Non-Operational Permissive Mode: Not Supported 00:24:48.024 NVM Sets: Not Supported 00:24:48.024 Read Recovery Levels: Not Supported 00:24:48.024 Endurance Groups: Not Supported 00:24:48.024 Predictable Latency Mode: Not Supported 00:24:48.024 Traffic Based Keep Alive: Not Supported 00:24:48.024 Namespace Granularity: Not Supported 00:24:48.024 SQ Associations: Not Supported 00:24:48.024 UUID List: Not Supported 00:24:48.024 Multi-Domain Subsystem: Not Supported 00:24:48.024 Fixed Capacity Management: Not Supported 00:24:48.024 Variable Capacity Management: Not Supported 00:24:48.024 Delete Endurance Group: Not Supported 00:24:48.024 Delete NVM Set: Not Supported 00:24:48.024 Extended LBA Formats Supported: Not Supported 00:24:48.024 Flexible Data Placement Supported: Not Supported 00:24:48.024 00:24:48.024 Controller Memory Buffer Support 00:24:48.024 ================================ 00:24:48.024 Supported: No 00:24:48.024 00:24:48.024 Persistent Memory Region Support 00:24:48.024 ================================ 00:24:48.024 Supported: No 00:24:48.024 00:24:48.024 Admin Command Set Attributes 00:24:48.024 ============================ 00:24:48.024 Security Send/Receive: Not Supported 00:24:48.024 Format NVM: Not Supported 00:24:48.024 Firmware Activate/Download: Not Supported 00:24:48.024 Namespace Management: Not Supported 00:24:48.024 Device Self-Test: Not Supported 00:24:48.024 Directives: Not Supported 00:24:48.024 NVMe-MI: Not Supported 00:24:48.024 Virtualization Management: Not Supported 00:24:48.024 Doorbell Buffer Config: Not Supported 00:24:48.024 Get LBA Status Capability: Not Supported 00:24:48.024 Command & Feature Lockdown Capability: Not Supported 00:24:48.024 Abort Command Limit: 4 00:24:48.024 Async Event Request Limit: 4 00:24:48.024 Number of Firmware Slots: N/A 00:24:48.024 Firmware Slot 1 Read-Only: N/A 00:24:48.024 Firmware Activation Without Reset: N/A 00:24:48.024 Multiple Update Detection Support: N/A 00:24:48.024 Firmware Update Granularity: No Information Provided 00:24:48.024 Per-Namespace SMART Log: No 00:24:48.024 Asymmetric Namespace Access Log Page: Not Supported 00:24:48.024 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:48.024 Command Effects Log Page: Supported 00:24:48.024 Get Log Page Extended 
Data: Supported 00:24:48.024 Telemetry Log Pages: Not Supported 00:24:48.024 Persistent Event Log Pages: Not Supported 00:24:48.024 Supported Log Pages Log Page: May Support 00:24:48.024 Commands Supported & Effects Log Page: Not Supported 00:24:48.024 Feature Identifiers & Effects Log Page: May Support 00:24:48.024 NVMe-MI Commands & Effects Log Page: May Support 00:24:48.024 Data Area 4 for Telemetry Log: Not Supported 00:24:48.024 Error Log Page Entries Supported: 128 00:24:48.024 Keep Alive: Supported 00:24:48.024 Keep Alive Granularity: 10000 ms 00:24:48.024 00:24:48.024 NVM Command Set Attributes 00:24:48.024 ========================== 00:24:48.024 Submission Queue Entry Size 00:24:48.024 Max: 64 00:24:48.024 Min: 64 00:24:48.024 Completion Queue Entry Size 00:24:48.024 Max: 16 00:24:48.024 Min: 16 00:24:48.024 Number of Namespaces: 32 00:24:48.024 Compare Command: Supported 00:24:48.024 Write Uncorrectable Command: Not Supported 00:24:48.024 Dataset Management Command: Supported 00:24:48.024 Write Zeroes Command: Supported 00:24:48.024 Set Features Save Field: Not Supported 00:24:48.024 Reservations: Supported 00:24:48.024 Timestamp: Not Supported 00:24:48.024 Copy: Supported 00:24:48.024 Volatile Write Cache: Present 00:24:48.024 Atomic Write Unit (Normal): 1 00:24:48.024 Atomic Write Unit (PFail): 1 00:24:48.024 Atomic Compare & Write Unit: 1 00:24:48.024 Fused Compare & Write: Supported 00:24:48.024 Scatter-Gather List 00:24:48.024 SGL Command Set: Supported 00:24:48.024 SGL Keyed: Supported 00:24:48.024 SGL Bit Bucket Descriptor: Not Supported 00:24:48.025 SGL Metadata Pointer: Not Supported 00:24:48.025 Oversized SGL: Not Supported 00:24:48.025 SGL Metadata Address: Not Supported 00:24:48.025 SGL Offset: Supported 00:24:48.025 Transport SGL Data Block: Not Supported 00:24:48.025 Replay Protected Memory Block: Not Supported 00:24:48.025 00:24:48.025 Firmware Slot Information 00:24:48.025 ========================= 00:24:48.025 Active slot: 1 00:24:48.025 Slot 1 Firmware Revision: 25.01 00:24:48.025 00:24:48.025 00:24:48.025 Commands Supported and Effects 00:24:48.025 ============================== 00:24:48.025 Admin Commands 00:24:48.025 -------------- 00:24:48.025 Get Log Page (02h): Supported 00:24:48.025 Identify (06h): Supported 00:24:48.025 Abort (08h): Supported 00:24:48.025 Set Features (09h): Supported 00:24:48.025 Get Features (0Ah): Supported 00:24:48.025 Asynchronous Event Request (0Ch): Supported 00:24:48.025 Keep Alive (18h): Supported 00:24:48.025 I/O Commands 00:24:48.025 ------------ 00:24:48.025 Flush (00h): Supported LBA-Change 00:24:48.025 Write (01h): Supported LBA-Change 00:24:48.025 Read (02h): Supported 00:24:48.025 Compare (05h): Supported 00:24:48.025 Write Zeroes (08h): Supported LBA-Change 00:24:48.025 Dataset Management (09h): Supported LBA-Change 00:24:48.025 Copy (19h): Supported LBA-Change 00:24:48.025 00:24:48.025 Error Log 00:24:48.025 ========= 00:24:48.025 00:24:48.025 Arbitration 00:24:48.025 =========== 00:24:48.025 Arbitration Burst: 1 00:24:48.025 00:24:48.025 Power Management 00:24:48.025 ================ 00:24:48.025 Number of Power States: 1 00:24:48.025 Current Power State: Power State #0 00:24:48.025 Power State #0: 00:24:48.025 Max Power: 0.00 W 00:24:48.025 Non-Operational State: Operational 00:24:48.025 Entry Latency: Not Reported 00:24:48.025 Exit Latency: Not Reported 00:24:48.025 Relative Read Throughput: 0 00:24:48.025 Relative Read Latency: 0 00:24:48.025 Relative Write Throughput: 0 00:24:48.025 Relative Write Latency: 0 
00:24:48.025 Idle Power: Not Reported 00:24:48.025 Active Power: Not Reported 00:24:48.025 Non-Operational Permissive Mode: Not Supported 00:24:48.025 00:24:48.025 Health Information 00:24:48.025 ================== 00:24:48.025 Critical Warnings: 00:24:48.025 Available Spare Space: OK 00:24:48.025 Temperature: OK 00:24:48.025 Device Reliability: OK 00:24:48.025 Read Only: No 00:24:48.025 Volatile Memory Backup: OK 00:24:48.025 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:48.025 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:48.025 Available Spare: 0% 00:24:48.025 Available Spare Threshold: 0% 00:24:48.025 Life Percentage Used:[2024-12-06 18:36:42.577262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.577268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2426690) 00:24:48.025 [2024-12-06 18:36:42.577274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.025 [2024-12-06 18:36:42.577286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488b80, cid 7, qid 0 00:24:48.025 [2024-12-06 18:36:42.577472] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.025 [2024-12-06 18:36:42.577478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.025 [2024-12-06 18:36:42.577482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.577486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488b80) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.577525] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:48.025 [2024-12-06 18:36:42.577535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488100) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.577541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.025 [2024-12-06 18:36:42.577547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488280) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.577552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.025 [2024-12-06 18:36:42.577557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488400) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.577561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.025 [2024-12-06 18:36:42.577566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488580) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.577571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:48.025 [2024-12-06 18:36:42.577579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.577583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.577587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2426690) 00:24:48.025 [2024-12-06 18:36:42.577594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:48.025 [2024-12-06 18:36:42.577606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488580, cid 3, qid 0 00:24:48.025 [2024-12-06 18:36:42.577789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.025 [2024-12-06 18:36:42.577796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.025 [2024-12-06 18:36:42.577800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.577804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488580) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.577811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.577815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.577818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2426690) 00:24:48.025 [2024-12-06 18:36:42.577825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.025 [2024-12-06 18:36:42.577839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488580, cid 3, qid 0 00:24:48.025 [2024-12-06 18:36:42.578020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.025 [2024-12-06 18:36:42.578026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.025 [2024-12-06 18:36:42.578030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.578034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488580) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.578039] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:48.025 [2024-12-06 18:36:42.578044] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:48.025 [2024-12-06 18:36:42.578053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.578057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.578061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2426690) 00:24:48.025 [2024-12-06 18:36:42.578067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.025 [2024-12-06 18:36:42.578078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488580, cid 3, qid 0 00:24:48.025 [2024-12-06 18:36:42.578238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.025 [2024-12-06 18:36:42.578245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.025 [2024-12-06 18:36:42.578248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.578252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488580) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.578262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.578266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.578270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2426690) 00:24:48.025 [2024-12-06 18:36:42.578277] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.025 [2024-12-06 18:36:42.578287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488580, cid 3, qid 0 00:24:48.025 [2024-12-06 18:36:42.578460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.025 [2024-12-06 18:36:42.578467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.025 [2024-12-06 18:36:42.578470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.578474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488580) on tqpair=0x2426690 00:24:48.025 [2024-12-06 18:36:42.578484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.025 [2024-12-06 18:36:42.578488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.026 [2024-12-06 18:36:42.578491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2426690) 00:24:48.026 [2024-12-06 18:36:42.578498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.026 [2024-12-06 18:36:42.578509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488580, cid 3, qid 0 00:24:48.026 [2024-12-06 18:36:42.579690] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.026 [2024-12-06 18:36:42.579700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.026 [2024-12-06 18:36:42.579704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.026 [2024-12-06 18:36:42.579708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488580) on tqpair=0x2426690 00:24:48.026 [2024-12-06 18:36:42.579719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:48.026 [2024-12-06 18:36:42.579724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:48.026 [2024-12-06 18:36:42.579727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2426690) 00:24:48.026 [2024-12-06 18:36:42.579734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.026 [2024-12-06 18:36:42.579747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2488580, cid 3, qid 0 00:24:48.026 [2024-12-06 18:36:42.579935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:48.026 [2024-12-06 18:36:42.579941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:48.026 [2024-12-06 18:36:42.579945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:48.026 [2024-12-06 18:36:42.579948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2488580) on tqpair=0x2426690 00:24:48.026 [2024-12-06 18:36:42.579957] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 1 milliseconds 00:24:48.026 0% 00:24:48.026 Data Units Read: 0 00:24:48.026 Data Units Written: 0 00:24:48.026 Host Read Commands: 0 00:24:48.026 Host Write Commands: 0 00:24:48.026 Controller Busy Time: 0 minutes 00:24:48.026 Power Cycles: 0 00:24:48.026 Power On Hours: 0 hours 00:24:48.026 Unsafe Shutdowns: 0 00:24:48.026 Unrecoverable Media Errors: 0 00:24:48.026 Lifetime Error Log Entries: 0 00:24:48.026 Warning Temperature Time: 0 
minutes 00:24:48.026 Critical Temperature Time: 0 minutes 00:24:48.026 00:24:48.026 Number of Queues 00:24:48.026 ================ 00:24:48.026 Number of I/O Submission Queues: 127 00:24:48.026 Number of I/O Completion Queues: 127 00:24:48.026 00:24:48.026 Active Namespaces 00:24:48.026 ================= 00:24:48.026 Namespace ID:1 00:24:48.026 Error Recovery Timeout: Unlimited 00:24:48.026 Command Set Identifier: NVM (00h) 00:24:48.026 Deallocate: Supported 00:24:48.026 Deallocated/Unwritten Error: Not Supported 00:24:48.026 Deallocated Read Value: Unknown 00:24:48.026 Deallocate in Write Zeroes: Not Supported 00:24:48.026 Deallocated Guard Field: 0xFFFF 00:24:48.026 Flush: Supported 00:24:48.026 Reservation: Supported 00:24:48.026 Namespace Sharing Capabilities: Multiple Controllers 00:24:48.026 Size (in LBAs): 131072 (0GiB) 00:24:48.026 Capacity (in LBAs): 131072 (0GiB) 00:24:48.026 Utilization (in LBAs): 131072 (0GiB) 00:24:48.026 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:48.026 EUI64: ABCDEF0123456789 00:24:48.026 UUID: 29a83555-6a19-4f70-a5da-17d8ab7d13c1 00:24:48.026 Thin Provisioning: Not Supported 00:24:48.026 Per-NS Atomic Units: Yes 00:24:48.026 Atomic Boundary Size (Normal): 0 00:24:48.026 Atomic Boundary Size (PFail): 0 00:24:48.026 Atomic Boundary Offset: 0 00:24:48.026 Maximum Single Source Range Length: 65535 00:24:48.026 Maximum Copy Length: 65535 00:24:48.026 Maximum Source Range Count: 1 00:24:48.026 NGUID/EUI64 Never Reused: No 00:24:48.026 Namespace Write Protected: No 00:24:48.026 Number of LBA Formats: 1 00:24:48.026 Current LBA Format: LBA Format #00 00:24:48.026 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:48.026 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:48.026 rmmod nvme_tcp 00:24:48.026 rmmod nvme_fabrics 00:24:48.026 rmmod nvme_keyring 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2229942 ']' 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 2229942 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2229942 ']' 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2229942 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229942 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229942' 00:24:48.026 killing process with pid 2229942 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2229942 00:24:48.026 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2229942 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.287 18:36:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:50.832 00:24:50.832 real 0m11.630s 00:24:50.832 user 0m8.396s 00:24:50.832 sys 0m6.132s 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.832 ************************************ 00:24:50.832 END TEST nvmf_identify 00:24:50.832 ************************************ 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.832 ************************************ 
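The tail of nvmf_identify above is the standard nvmftestfini teardown: unload the host-side NVMe modules, kill the target process, strip only the SPDK-tagged iptables rules, and drop the target's network namespace. Condensed into plain shell, with the internals of _remove_spdk_ns assumed rather than traced, the sequence amounts to roughly:

modprobe -r nvme-tcp nvme-fabrics nvme-keyring         # nvmfcleanup: traced above as separate modprobe -v -r calls
kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: pid 2229942 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drops only rules carrying the SPDK_NVMF comment
ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator-side address, as traced above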
00:24:50.832 START TEST nvmf_perf 00:24:50.832 ************************************ 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:50.832 * Looking for test storage... 00:24:50.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.832 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:50.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.833 --rc genhtml_branch_coverage=1 00:24:50.833 --rc genhtml_function_coverage=1 00:24:50.833 --rc genhtml_legend=1 00:24:50.833 --rc geninfo_all_blocks=1 00:24:50.833 --rc geninfo_unexecuted_blocks=1 00:24:50.833 00:24:50.833 ' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:50.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.833 --rc genhtml_branch_coverage=1 00:24:50.833 --rc genhtml_function_coverage=1 00:24:50.833 --rc genhtml_legend=1 00:24:50.833 --rc geninfo_all_blocks=1 00:24:50.833 --rc geninfo_unexecuted_blocks=1 00:24:50.833 00:24:50.833 ' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:50.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.833 --rc genhtml_branch_coverage=1 00:24:50.833 --rc genhtml_function_coverage=1 00:24:50.833 --rc genhtml_legend=1 00:24:50.833 --rc geninfo_all_blocks=1 00:24:50.833 --rc geninfo_unexecuted_blocks=1 00:24:50.833 00:24:50.833 ' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:50.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.833 --rc genhtml_branch_coverage=1 00:24:50.833 --rc genhtml_function_coverage=1 00:24:50.833 --rc genhtml_legend=1 00:24:50.833 --rc geninfo_all_blocks=1 00:24:50.833 --rc geninfo_unexecuted_blocks=1 00:24:50.833 00:24:50.833 ' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.833 18:36:45 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:50.833 18:36:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:58.978 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:58.978 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:58.978 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:58.978 18:36:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:58.978 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:58.979 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.979 18:36:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:58.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:58.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms
00:24:58.979
00:24:58.979 --- 10.0.0.2 ping statistics ---
00:24:58.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:58.979 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:58.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:58.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms
00:24:58.979
00:24:58.979 --- 10.0.0.1 ping statistics ---
00:24:58.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:58.979 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2234444
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2234444
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2234444 ']'
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:58.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:58.979 18:36:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:58.979 [2024-12-06 18:36:52.958417] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:24:58.979 [2024-12-06 18:36:52.958489] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:58.979 [2024-12-06 18:36:53.060090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:58.979 [2024-12-06 18:36:53.113691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:58.979 [2024-12-06 18:36:53.113746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:58.979 [2024-12-06 18:36:53.113755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:58.979 [2024-12-06 18:36:53.113762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:58.979 [2024-12-06 18:36:53.113769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:58.979 [2024-12-06 18:36:53.115716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:58.979 [2024-12-06 18:36:53.115882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:58.979 [2024-12-06 18:36:53.116044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:58.979 [2024-12-06 18:36:53.116045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:59.242 18:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:59.242 18:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:24:59.242 18:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:59.242 18:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:59.242 18:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:59.242 18:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:59.242 18:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:24:59.242 18:36:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:24:59.816 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:24:59.816 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:24:59.816 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0
00:24:59.817 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:25:00.079 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
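Condensed for reference: the target provisioning that the nvmf_perf trace drives through rpc.py, immediately above and below this point, reduces to the short sequence sketched here. The individual commands are verbatim from the trace; piping gen_nvme.sh into load_subsystem_config is an assumption about how the two host/perf.sh@28 calls are combined, and the comments are an editor's gloss, not test output.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # attach the local NVMe controller (Nvme0 -> bdev Nvme0n1); gen_nvme.sh emits the JSON config
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config
    # add a 64 MB RAM-backed bdev with a 512-byte block size
    $rpc bdev_malloc_create 64 512
    # export both bdevs over NVMe/TCP on 10.0.0.2:4420 (this part continues in the trace below)
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420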
00:25:00.079 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:25:00.079 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:25:00.079 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:25:00.079 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:25:00.343 [2024-12-06 18:36:54.956935] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:00.343 18:36:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:00.604 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:00.604 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:00.604 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:00.604 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:00.864 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:01.124 [2024-12-06 18:36:55.752636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:01.124 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:01.384 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:25:01.384 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:25:01.384 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:25:01.384 18:36:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:25:02.767 Initializing NVMe Controllers
00:25:02.767 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:25:02.767 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:25:02.767 Initialization complete. Launching workers.
00:25:02.767 ========================================================
00:25:02.767 Latency(us)
00:25:02.767 Device Information : IOPS MiB/s Average min max
00:25:02.767 PCIE (0000:65:00.0) NSID 1 from core 0: 78807.74 307.84 405.27 13.47 5476.77
00:25:02.767 ========================================================
00:25:02.767 Total : 78807.74 307.84 405.27 13.47 5476.77
00:25:02.767
00:25:02.767 18:36:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:04.152 Initializing NVMe Controllers
00:25:04.153 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:04.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:04.153 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:04.153 Initialization complete. Launching workers.
00:25:04.153 ========================================================
00:25:04.153 Latency(us)
00:25:04.153 Device Information : IOPS MiB/s Average min max
00:25:04.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12611.27 256.27 44673.81
00:25:04.153 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 60.00 0.23 17292.06 7967.81 47899.53
00:25:04.153 ========================================================
00:25:04.153 Total : 140.00 0.55 14617.32 256.27 47899.53
00:25:04.153
00:25:04.153 18:36:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:05.172 Initializing NVMe Controllers
00:25:05.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:05.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:05.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:05.172 Initialization complete. Launching workers.
00:25:05.172 ========================================================
00:25:05.172 Latency(us)
00:25:05.172 Device Information : IOPS MiB/s Average min max
00:25:05.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11972.84 46.77 2673.92 461.77 6957.79
00:25:05.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3794.95 14.82 8500.51 6373.16 15964.83
00:25:05.172 ========================================================
00:25:05.172 Total : 15767.80 61.59 4076.25 461.77 15964.83
00:25:05.172
00:25:05.172 18:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:25:05.172 18:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:25:05.172 18:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:07.717 Initializing NVMe Controllers
00:25:07.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:07.717 Controller IO queue size 128, less than required.
00:25:07.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:07.717 Controller IO queue size 128, less than required.
00:25:07.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:07.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:07.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:07.717 Initialization complete. Launching workers.
00:25:07.717 ========================================================
00:25:07.717 Latency(us)
00:25:07.717 Device Information : IOPS MiB/s Average min max
00:25:07.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1879.99 470.00 68867.42 42039.71 123880.14
00:25:07.717 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.50 151.12 221804.49 63898.46 335894.56
00:25:07.717 ========================================================
00:25:07.717 Total : 2484.48 621.12 106078.31 42039.71 335894.56
00:25:07.717
00:25:07.717 18:37:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:25:07.717 No valid NVMe controllers or AIO or URING devices found
00:25:07.717 Initializing NVMe Controllers
00:25:07.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:07.717 Controller IO queue size 128, less than required.
00:25:07.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:07.717 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:07.717 Controller IO queue size 128, less than required.
00:25:07.717 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:07.717 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:25:07.717 WARNING: Some requested NVMe devices were skipped
00:25:07.717 18:37:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:10.309 Initializing NVMe Controllers
00:25:10.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:10.309 Controller IO queue size 128, less than required.
00:25:10.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:10.309 Controller IO queue size 128, less than required.
00:25:10.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:10.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:10.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:10.309 Initialization complete. Launching workers.
00:25:10.309
00:25:10.309 ====================
00:25:10.309 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:10.309 TCP transport:
00:25:10.309 polls: 39501
00:25:10.309 idle_polls: 24823
00:25:10.309 sock_completions: 14678
00:25:10.309 nvme_completions: 7139
00:25:10.309 submitted_requests: 10636
00:25:10.309 queued_requests: 1
00:25:10.309
00:25:10.309 ====================
00:25:10.309 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:10.309 TCP transport:
00:25:10.309 polls: 38566
00:25:10.309 idle_polls: 23032
00:25:10.309 sock_completions: 15534
00:25:10.309 nvme_completions: 8085
00:25:10.309 submitted_requests: 12158
00:25:10.309 queued_requests: 1
00:25:10.309 ========================================================
00:25:10.309 Latency(us)
00:25:10.309 Device Information : IOPS MiB/s Average min max
00:25:10.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1784.13 446.03 73364.03 38843.26 135881.51
00:25:10.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2020.58 505.14 64291.42 29286.32 114908.26
00:25:10.309 ========================================================
00:25:10.309 Total : 3804.71 951.18 68545.80 29286.32 135881.51
00:25:10.309
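The four spdk_nvme_perf runs above sweep queue depth and I/O size against the same two-namespace target (one local PCIe baseline, then NVMe/TCP). As a condensed sketch of the invocation shape before the teardown below — the flag glosses are an editor's reading of the runs traced above, not tool documentation:

    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    # -q queue depth, -o I/O size in bytes, -w workload pattern, -M read percentage
    # for mixed workloads, -t run time in seconds; -r selects the target by transport ID
    $perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # appending --transport-stat, as in the last run, additionally dumps the per-queue
    # TCP poll/completion counters shown in the statistics block above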
00:25:10.309 18:37:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:25:10.309 18:37:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:10.569 rmmod nvme_tcp
00:25:10.569 rmmod nvme_fabrics
00:25:10.569 rmmod nvme_keyring
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2234444 ']'
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2234444
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2234444 ']'
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2234444
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2234444
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2234444'
00:25:10.569 killing process with pid 2234444
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2234444
00:25:10.569 18:37:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2234444
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:12.476 18:37:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:15.020
00:25:15.020 real 0m24.206s
00:25:15.020 user 0m58.141s
00:25:15.020 sys 0m8.602s
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:15.020 ************************************
00:25:15.020 END TEST nvmf_perf
00:25:15.020 ************************************
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:15.020 ************************************
00:25:15.020 START TEST nvmf_fio_host
00:25:15.020 ************************************
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:25:15.020 * Looking for test storage...
00:25:15.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:15.020 --rc genhtml_branch_coverage=1
00:25:15.020 --rc genhtml_function_coverage=1
00:25:15.020 --rc genhtml_legend=1
00:25:15.020 --rc geninfo_all_blocks=1
00:25:15.020 --rc geninfo_unexecuted_blocks=1
00:25:15.020
00:25:15.020 '
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:15.020 --rc genhtml_branch_coverage=1
00:25:15.020 --rc genhtml_function_coverage=1
00:25:15.020 --rc genhtml_legend=1
00:25:15.020 --rc geninfo_all_blocks=1
00:25:15.020 --rc geninfo_unexecuted_blocks=1
00:25:15.020
00:25:15.020 '
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:25:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:15.020 --rc genhtml_branch_coverage=1
00:25:15.020 --rc genhtml_function_coverage=1
00:25:15.020 --rc genhtml_legend=1
00:25:15.020 --rc geninfo_all_blocks=1
00:25:15.020 --rc geninfo_unexecuted_blocks=1
00:25:15.020
00:25:15.020 '
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:25:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:15.020 --rc genhtml_branch_coverage=1
00:25:15.020 --rc genhtml_function_coverage=1
00:25:15.020 --rc genhtml_legend=1
00:25:15.020 --rc geninfo_all_blocks=1
00:25:15.020 --rc geninfo_unexecuted_blocks=1
00:25:15.020
00:25:15.020 '
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:25:15.020 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:15.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable
00:25:15.021 18:37:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=()
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=()
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=()
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=()
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=()
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:25:23.169 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:25:23.169 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:25:23.169 Found net devices under 0000:4b:00.0: cvl_0_0
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:25:23.169 Found net devices under 0000:4b:00.1: cvl_0_1
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:23.169 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:23.170 18:37:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:23.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:23.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms
00:25:23.170
00:25:23.170 --- 10.0.0.2 ping statistics ---
00:25:23.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:23.170 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:23.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:23.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms
00:25:23.170
00:25:23.170 --- 10.0.0.1 ping statistics ---
00:25:23.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:23.170 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:23.170 18:37:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
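For orientation, the nvmftestinit networking that the fio host test just repeated (the same sequence the perf test ran at the top of this excerpt) boils down to the commands below. The commands are verbatim from the trace; only the comments are an editor's gloss. The target side of the e810 pair moves into a private network namespace so that target and initiator traffic crosses a real link:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the host-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP in; the SPDK_NVMF comment lets the teardown strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # reachability check in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1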
00:25:23.170 [2024-12-06 18:37:17.181956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.170 [2024-12-06 18:37:17.279206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.170 [2024-12-06 18:37:17.332375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.170 [2024-12-06 18:37:17.332429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.170 [2024-12-06 18:37:17.332438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.170 [2024-12-06 18:37:17.332450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.170 [2024-12-06 18:37:17.332456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.170 [2024-12-06 18:37:17.334476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.170 [2024-12-06 18:37:17.334636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.170 [2024-12-06 18:37:17.334796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.170 [2024-12-06 18:37:17.334891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.431 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:23.431 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:23.431 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:23.431 [2024-12-06 18:37:18.169066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.431 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:23.431 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:23.431 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.691 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:23.691 Malloc1 00:25:23.952 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.952 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:24.213 18:37:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.474 [2024-12-06 18:37:19.021251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:24.474 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:24.785 18:37:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:25.044 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:25.044 fio-3.35 00:25:25.044 Starting 1 thread 00:25:27.617 00:25:27.617 test: (groupid=0, jobs=1): 
err= 0: pid=2242198: Fri Dec 6 18:37:22 2024 00:25:27.617 read: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(108MiB/2004msec) 00:25:27.617 slat (usec): min=2, max=279, avg= 2.17, stdev= 2.26 00:25:27.617 clat (usec): min=3268, max=8982, avg=5087.36, stdev=381.02 00:25:27.617 lat (usec): min=3271, max=8984, avg=5089.53, stdev=381.17 00:25:27.617 clat percentiles (usec): 00:25:27.617 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4621], 20.00th=[ 4817], 00:25:27.617 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:25:27.617 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:25:27.617 | 99.00th=[ 5932], 99.50th=[ 6325], 99.90th=[ 8356], 99.95th=[ 8586], 00:25:27.617 | 99.99th=[ 8848] 00:25:27.617 bw ( KiB/s): min=53920, max=55984, per=99.99%, avg=55298.00, stdev=934.61, samples=4 00:25:27.617 iops : min=13480, max=13996, avg=13824.50, stdev=233.65, samples=4 00:25:27.617 write: IOPS=13.8k, BW=54.0MiB/s (56.6MB/s)(108MiB/2004msec); 0 zone resets 00:25:27.617 slat (usec): min=2, max=250, avg= 2.23, stdev= 1.68 00:25:27.617 clat (usec): min=2723, max=7660, avg=4126.15, stdev=335.89 00:25:27.617 lat (usec): min=2726, max=7663, avg=4128.38, stdev=336.13 00:25:27.617 clat percentiles (usec): 00:25:27.617 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 3884], 00:25:27.617 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:25:27.617 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:25:27.617 | 99.00th=[ 4948], 99.50th=[ 5866], 99.90th=[ 6915], 99.95th=[ 7111], 00:25:27.617 | 99.99th=[ 7635] 00:25:27.617 bw ( KiB/s): min=54256, max=55704, per=99.95%, avg=55260.00, stdev=675.12, samples=4 00:25:27.617 iops : min=13564, max=13926, avg=13815.00, stdev=168.78, samples=4 00:25:27.617 lat (msec) : 4=16.77%, 10=83.23% 00:25:27.617 cpu : usr=75.29%, sys=23.51%, ctx=28, majf=0, minf=16 00:25:27.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:27.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:27.617 issued rwts: total=27707,27699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:27.617 00:25:27.617 Run status group 0 (all jobs): 00:25:27.617 READ: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=108MiB (113MB), run=2004-2004msec 00:25:27.617 WRITE: bw=54.0MiB/s (56.6MB/s), 54.0MiB/s-54.0MiB/s (56.6MB/s-56.6MB/s), io=108MiB (113MB), run=2004-2004msec 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:27.617 
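Both fio jobs in this test drive the target through the SPDK NVMe external ioengine rather than a kernel block device: the plugin is LD_PRELOADed into a stock fio binary and the TCP connection parameters travel inside --filename. A minimal sketch of that invocation, assuming fio at /usr/src/fio and an SPDK tree at $SPDK_DIR (illustrative paths):

# Run fio with the SPDK NVMe plugin; the job file must set ioengine=spdk
LD_PRELOAD=$SPDK_DIR/build/fio/spdk_nvme \
  /usr/src/fio/fio $SPDK_DIR/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096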
18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:27.617 18:37:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:27.887 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:27.887 fio-3.35 00:25:27.887 Starting 1 thread 00:25:30.433 00:25:30.433 test: (groupid=0, jobs=1): err= 0: pid=2242726: Fri Dec 6 18:37:24 2024 00:25:30.433 read: IOPS=9375, BW=146MiB/s (154MB/s)(294MiB/2006msec) 00:25:30.433 slat (usec): min=3, max=110, avg= 3.64, stdev= 1.65 00:25:30.433 clat (usec): min=1461, max=54236, avg=8467.88, stdev=3963.91 00:25:30.433 lat (usec): min=1465, max=54240, avg=8471.52, stdev=3964.00 00:25:30.433 clat percentiles (usec): 00:25:30.433 | 1.00th=[ 4047], 5.00th=[ 5211], 10.00th=[ 5735], 20.00th=[ 6390], 00:25:30.433 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 8094], 60.00th=[ 8717], 00:25:30.433 | 70.00th=[ 9372], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:25:30.433 | 99.00th=[14615], 99.50th=[48497], 99.90th=[53216], 99.95th=[53740], 00:25:30.433 | 99.99th=[54264] 00:25:30.433 bw ( KiB/s): min=62496, max=87392, per=49.09%, avg=73640.00, stdev=10278.34, samples=4 00:25:30.433 iops : min= 3906, max= 5462, avg=4603.00, stdev=642.29, samples=4 00:25:30.433 write: IOPS=5666, BW=88.5MiB/s (92.8MB/s)(150MiB/1698msec); 0 zone resets 00:25:30.433 slat (usec): min=39, max=447, 
avg=41.25, stdev= 9.25 00:25:30.433 clat (usec): min=2274, max=17141, avg=9116.43, stdev=1466.18 00:25:30.433 lat (usec): min=2314, max=17278, avg=9157.68, stdev=1469.67 00:25:30.433 clat percentiles (usec): 00:25:30.433 | 1.00th=[ 6128], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7898], 00:25:30.433 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:25:30.433 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11469], 00:25:30.433 | 99.00th=[13042], 99.50th=[15926], 99.90th=[16581], 99.95th=[16712], 00:25:30.433 | 99.99th=[17171] 00:25:30.433 bw ( KiB/s): min=63840, max=91072, per=84.62%, avg=76712.00, stdev=11159.40, samples=4 00:25:30.433 iops : min= 3990, max= 5692, avg=4794.50, stdev=697.46, samples=4 00:25:30.433 lat (msec) : 2=0.04%, 4=0.68%, 10=76.16%, 20=22.69%, 50=0.20% 00:25:30.433 lat (msec) : 100=0.25% 00:25:30.433 cpu : usr=84.59%, sys=14.06%, ctx=16, majf=0, minf=30 00:25:30.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:30.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:30.433 issued rwts: total=18807,9621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:30.433 00:25:30.433 Run status group 0 (all jobs): 00:25:30.433 READ: bw=146MiB/s (154MB/s), 146MiB/s-146MiB/s (154MB/s-154MB/s), io=294MiB (308MB), run=2006-2006msec 00:25:30.433 WRITE: bw=88.5MiB/s (92.8MB/s), 88.5MiB/s-88.5MiB/s (92.8MB/s-92.8MB/s), io=150MiB (158MB), run=1698-1698msec 00:25:30.433 18:37:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:30.433 rmmod nvme_tcp 00:25:30.433 rmmod nvme_fabrics 00:25:30.433 rmmod nvme_keyring 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2241368 ']' 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2241368 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2241368 ']' 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 2241368 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2241368 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2241368' 00:25:30.433 killing process with pid 2241368 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2241368 00:25:30.433 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2241368 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.692 18:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.597 18:37:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:32.858 00:25:32.858 real 0m17.988s 00:25:32.858 user 1m0.897s 00:25:32.858 sys 0m7.851s 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.858 ************************************ 00:25:32.858 END TEST nvmf_fio_host 00:25:32.858 ************************************ 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.858 ************************************ 00:25:32.858 START TEST nvmf_failover 00:25:32.858 ************************************ 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:32.858 * Looking for test storage... 00:25:32.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.858 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.119 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.119 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.119 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.119 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.119 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:33.119 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:33.119 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:33.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.120 --rc genhtml_branch_coverage=1 00:25:33.120 --rc genhtml_function_coverage=1 00:25:33.120 --rc genhtml_legend=1 00:25:33.120 --rc geninfo_all_blocks=1 00:25:33.120 --rc geninfo_unexecuted_blocks=1 00:25:33.120 00:25:33.120 ' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:33.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.120 --rc genhtml_branch_coverage=1 00:25:33.120 --rc genhtml_function_coverage=1 00:25:33.120 --rc genhtml_legend=1 00:25:33.120 --rc geninfo_all_blocks=1 00:25:33.120 --rc geninfo_unexecuted_blocks=1 00:25:33.120 00:25:33.120 ' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:33.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.120 --rc genhtml_branch_coverage=1 00:25:33.120 --rc genhtml_function_coverage=1 00:25:33.120 --rc genhtml_legend=1 00:25:33.120 --rc geninfo_all_blocks=1 00:25:33.120 --rc geninfo_unexecuted_blocks=1 00:25:33.120 00:25:33.120 ' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:33.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.120 --rc genhtml_branch_coverage=1 00:25:33.120 --rc genhtml_function_coverage=1 00:25:33.120 --rc genhtml_legend=1 00:25:33.120 --rc geninfo_all_blocks=1 00:25:33.120 --rc geninfo_unexecuted_blocks=1 00:25:33.120 00:25:33.120 ' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:33.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
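The scripts/common.sh xtrace a few entries back (lt calling cmp_versions) is the stock dotted-version comparison: split both strings on '.', '-' and ':', then compare field by numeric field until one side wins. A self-contained sketch of the same idea, with version_lt being a hypothetical name rather than the helper's real one:

# version_lt A B: succeeds when dotted version A sorts before B (sketch of the cmp_versions logic above)
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earliest differing field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                          # equal versions are not less-than
}
version_lt 1.15 2 && echo older                       # lcov 1.15 < 2, as in the check above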
00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:33.120 18:37:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:41.262 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:41.262 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:41.262 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.262 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:41.263 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:41.263 18:37:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:41.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:25:41.263 00:25:41.263 --- 10.0.0.2 ping statistics --- 00:25:41.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.263 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:41.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:25:41.263 00:25:41.263 --- 10.0.0.1 ping statistics --- 00:25:41.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.263 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2247385 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2247385 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2247385 ']' 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.263 18:37:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:41.263 [2024-12-06 18:37:35.227985] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:25:41.263 [2024-12-06 18:37:35.228043] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.263 [2024-12-06 18:37:35.328087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:41.263 [2024-12-06 18:37:35.379924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
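nvmfappstart then launches the target inside that namespace, pinned to cores 1-3 (-m 0xE) with all tracepoint groups enabled (-e 0xFFFF), and blocks until the RPC socket answers. Roughly as below, where the readiness poll is an assumption standing in for waitforlisten's exact logic:

# Start nvmf_tgt in the target netns and wait for its RPC server (sketch; $SPDK_DIR is illustrative)
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until $SPDK_DIR/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5                                       # any cheap RPC works as a liveness probe
done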
00:25:41.263 [2024-12-06 18:37:35.379975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.263 [2024-12-06 18:37:35.379984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.263 [2024-12-06 18:37:35.379991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.263 [2024-12-06 18:37:35.379998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.263 [2024-12-06 18:37:35.381818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.263 [2024-12-06 18:37:35.382111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.263 [2024-12-06 18:37:35.382110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.524 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.524 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:41.524 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:41.524 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:41.524 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:41.524 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.524 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:41.524 [2024-12-06 18:37:36.255613] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.525 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:41.785 Malloc0 00:25:41.785 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.045 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:42.306 18:37:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.306 [2024-12-06 18:37:37.071646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.566 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:42.566 [2024-12-06 18:37:37.260027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:42.566 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:42.826 [2024-12-06 18:37:37.444551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:25:42.826 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2247934 00:25:42.826 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:42.826 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.826 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2247934 /var/tmp/bdevperf.sock 00:25:42.826 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2247934 ']' 00:25:42.826 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:42.827 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:42.827 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:42.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:42.827 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:42.827 18:37:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:43.769 18:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.769 18:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:43.769 18:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:44.030 NVMe0n1 00:25:44.030 18:37:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:44.291 00:25:44.291 18:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2248202 00:25:44.291 18:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:44.291 18:37:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:45.679 18:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.679 [2024-12-06 18:37:40.217350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68ed0 is same with the state(6) to be set 00:25:45.679 [2024-12-06 18:37:40.217408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68ed0 is same with the state(6) to be set 00:25:45.679 [2024-12-06 18:37:40.217414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68ed0 is same with the state(6) to be set 00:25:45.679 
[2024-12-06 18:37:40.217419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68ed0 is same with the state(6) to be set
00:25:45.679 [2024-12-06 18:37:40.217516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d68ed0 is same with the state(6) to be set
00:25:45.679 18:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
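The recv-state notices above are the target side tearing down the port-4420 qpairs after the listener was removed; bdevperf rides through it because the controller was attached through two portals with -x failover, so the bdev_nvme layer switches to 4421 instead of failing I/O. The path setup against the bdevperf RPC socket, as a sketch:

# Primary and alternate path on one controller; -x failover makes the path switch automatic
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Drop the active listener on the target to force the failover:
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420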
00:25:48.985 18:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:48.985
00:25:48.985 18:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:49.246 [2024-12-06 18:37:43.794738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69980 is same with the state(6) to be set
00:25:49.246 [2024-12-06 18:37:43.794822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d69980 is same with the state(6) to be set
00:25:49.246 18:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:52.544 18:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:52.544 [2024-12-06 18:37:46.981445] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:52.544 18:37:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:53.488 18:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:53.488 [2024-12-06 18:37:48.168864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2f140 is same with the state(6) to be set
00:25:53.488 [2024-12-06 18:37:48.168864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2f140 is same with the state(6) to be set
[... the same *ERROR* line repeated for every remaining timestamp in this burst, 18:37:48.168906 through 18:37:48.169516 ...]
00:25:53.489 18:37:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2248202
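A note on the bdevperf result block that follows: the "mibps" throughput field is consistent with simply scaling "iops" by the 4096-byte "io_size". A one-line check with the numbers from this run:

  # 12486.711196126285 IOPS * 4096 B per I/O / 2^20 B per MiB
  awk 'BEGIN { printf "%.6f MiB/s\n", 12486.711196126285 * 4096 / 1048576 }'
  # prints 48.776216 MiB/s, matching the reported "mibps" of 48.7762156098683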
00:26:00.077 "max_latency_us": 23920.64 00:26:00.077 } 00:26:00.077 ], 00:26:00.077 "core_count": 1 00:26:00.077 } 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2247934 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2247934 ']' 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2247934 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2247934 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2247934' 00:26:00.077 killing process with pid 2247934 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2247934 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2247934 00:26:00.077 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:00.077 [2024-12-06 18:37:37.525965] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:26:00.077 [2024-12-06 18:37:37.526023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2247934 ] 00:26:00.077 [2024-12-06 18:37:37.612709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.077 [2024-12-06 18:37:37.648426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.077 Running I/O for 15 seconds... 
00:26:00.077 11082.00 IOPS, 43.29 MiB/s [2024-12-06T17:37:54.861Z]
[2024-12-06 18:37:40.218523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-06 18:37:40.218555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2 and cid:3, timestamps 18:37:40.218566 through 18:37:40.218607 ...]
[2024-12-06 18:37:40.218615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5be9d0 is same with the state(6) to be set
[2024-12-06 18:37:40.218681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-06 18:37:40.218692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ/WRITE print_command + ABORTED - SQ DELETION print_completion pairs for every other queued I/O, lba:94680 through lba:95592, timestamps 18:37:40.218707 through 18:37:40.220719 ...]
00:26:00.080 [2024-12-06 18:37:40.220728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.080 [2024-12-06 18:37:40.220736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.080 [2024-12-06 18:37:40.220745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.080 [2024-12-06 18:37:40.220753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.080 [2024-12-06 18:37:40.220763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.080 [2024-12-06 18:37:40.220770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.080 [2024-12-06 18:37:40.220779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.080 [2024-12-06 18:37:40.220787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.080 [2024-12-06 18:37:40.220797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.081 [2024-12-06 18:37:40.220805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.081 [2024-12-06 18:37:40.220814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.081 [2024-12-06 18:37:40.220823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.081 [2024-12-06 18:37:40.220832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.081 [2024-12-06 18:37:40.220840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.081 [2024-12-06 18:37:40.220849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.081 [2024-12-06 18:37:40.220856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.081 [2024-12-06 18:37:40.220866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.081 [2024-12-06 18:37:40.220874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.081 [2024-12-06 18:37:40.220883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.081 [2024-12-06 18:37:40.220891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.081 [2024-12-06 18:37:40.220900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:00.081 [2024-12-06 18:37:40.220907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
00:26:00.081 [2024-12-06 18:37:40.220927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:00.081 [2024-12-06 18:37:40.220934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:00.081 [2024-12-06 18:37:40.220941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95688 len:8 PRP1 0x0 PRP2 0x0
00:26:00.081 [2024-12-06 18:37:40.220949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:00.081 [2024-12-06 18:37:40.220989] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:00.081 [2024-12-06 18:37:40.220999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:00.081 [2024-12-06 18:37:40.224553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:00.081 [2024-12-06 18:37:40.224576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5be9d0 (9): Bad file descriptor
00:26:00.081 [2024-12-06 18:37:40.336865] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:00.081 10849.00 IOPS, 42.38 MiB/s [2024-12-06T17:37:54.865Z] 11159.33 IOPS, 43.59 MiB/s [2024-12-06T17:37:54.865Z] 11603.25 IOPS, 45.33 MiB/s [2024-12-06T17:37:54.865Z]
00:26:00.081 [2024-12-06 18:37:43.794297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:00.081 [2024-12-06 18:37:43.794337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three further ASYNC EVENT REQUEST (0c) qid:0 cid:2/1/0 prints, each with the same ABORTED - SQ DELETION (00/08) completion, omitted ...]
00:26:00.081 [2024-12-06 18:37:43.794384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5be9d0 is same with the state(6) to be set
00:26:00.081 [2024-12-06 18:37:43.795913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.081 [2024-12-06 18:37:43.795930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ (sqid:1 nsid:1 lba:79416-79632 len:8) and WRITE (sqid:1 nsid:1 lba:79640-80424 len:8) command prints, each followed by an ABORTED - SQ DELETION (00/08) completion, together with the matching nvme_qpair_abort_queued_reqs / nvme_qpair_manual_complete_request notices, omitted ...]
00:26:00.085 [2024-12-06 18:37:43.797683] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:00.085 [2024-12-06 18:37:43.797690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:26:00.085 [2024-12-06 18:37:43.800120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:00.085 [2024-12-06 18:37:43.800140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5be9d0 (9): Bad file descriptor
00:26:00.085 [2024-12-06 18:37:43.871873] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:26:00.085 11666.60 IOPS, 45.57 MiB/s [2024-12-06T17:37:54.869Z] 11892.83 IOPS, 46.46 MiB/s [2024-12-06T17:37:54.869Z] 12044.14 IOPS, 47.05 MiB/s [2024-12-06T17:37:54.869Z] 12140.62 IOPS, 47.42 MiB/s [2024-12-06T17:37:54.869Z]
00:26:00.085 [2024-12-06 18:37:48.171583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:00.085 [2024-12-06 18:37:48.171615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:00.085 [... the same print-command / ABORTED - SQ DELETION completion pair repeats for the in-flight WRITE commands lba:30784 through lba:31304 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), and the aborting queued i/o / Command completed manually pair repeats for the queued WRITE commands lba:31312 through lba:31752 and queued READ commands lba:30744 through lba:30776 (PRP1 0x0 PRP2 0x0) ...]
00:26:00.089 [2024-12-06 18:37:48.187085] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:26:00.089 [2024-12-06 18:37:48.187113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:00.089 [2024-12-06 18:37:48.187121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:00.089 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin commands cid:1, cid:2 and cid:3 ...]
18:37:48.187143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.089 [2024-12-06 18:37:48.187152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.089 [2024-12-06 18:37:48.187159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.089 [2024-12-06 18:37:48.187167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.089 [2024-12-06 18:37:48.187174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.089 [2024-12-06 18:37:48.187181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:00.089 [2024-12-06 18:37:48.187222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5be9d0 (9): Bad file descriptor 00:26:00.089 [2024-12-06 18:37:48.192025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:00.089 12221.78 IOPS, 47.74 MiB/s [2024-12-06T17:37:54.873Z] [2024-12-06 18:37:48.218583] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:00.089 12271.30 IOPS, 47.93 MiB/s [2024-12-06T17:37:54.873Z] 12324.64 IOPS, 48.14 MiB/s [2024-12-06T17:37:54.873Z] 12373.25 IOPS, 48.33 MiB/s [2024-12-06T17:37:54.873Z] 12433.23 IOPS, 48.57 MiB/s [2024-12-06T17:37:54.873Z] 12480.64 IOPS, 48.75 MiB/s 00:26:00.089 Latency(us) 00:26:00.089 [2024-12-06T17:37:54.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.089 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:00.089 Verification LBA range: start 0x0 length 0x4000 00:26:00.089 NVMe0n1 : 15.01 12486.71 48.78 655.49 0.00 9718.93 539.31 23920.64 00:26:00.089 [2024-12-06T17:37:54.873Z] =================================================================================================================== 00:26:00.089 [2024-12-06T17:37:54.873Z] Total : 12486.71 48.78 655.49 0.00 9718.93 539.31 23920.64 00:26:00.089 Received shutdown signal, test time was about 15.000000 seconds 00:26:00.089 00:26:00.089 Latency(us) 00:26:00.089 [2024-12-06T17:37:54.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:00.089 [2024-12-06T17:37:54.873Z] =================================================================================================================== 00:26:00.090 [2024-12-06T17:37:54.874Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2251105 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2251105 /var/tmp/bdevperf.sock 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2251105 ']' 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:00.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.090 18:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:00.660 18:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.660 18:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:00.660 18:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:00.660 [2024-12-06 18:37:55.393729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:00.660 18:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:00.920 [2024-12-06 18:37:55.570157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:00.920 18:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:01.180 NVMe0n1 00:26:01.440 18:37:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:01.700 00:26:01.700 18:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:01.700 00:26:01.961 18:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:01.961 18:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:01.961 18:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:02.222 18:37:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:05.524 18:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:05.524 18:37:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:05.524 18:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2252319 00:26:05.524 18:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:05.524 18:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2252319 00:26:06.467 { 00:26:06.467 "results": [ 00:26:06.467 { 00:26:06.467 "job": "NVMe0n1", 00:26:06.467 "core_mask": "0x1", 00:26:06.468 "workload": "verify", 00:26:06.468 "status": "finished", 00:26:06.468 "verify_range": { 00:26:06.468 "start": 0, 00:26:06.468 "length": 16384 00:26:06.468 }, 00:26:06.468 "queue_depth": 128, 00:26:06.468 "io_size": 4096, 00:26:06.468 "runtime": 1.002882, 00:26:06.468 "iops": 12837.003755177579, 00:26:06.468 "mibps": 50.144545918662416, 00:26:06.468 "io_failed": 0, 00:26:06.468 "io_timeout": 0, 00:26:06.468 "avg_latency_us": 9935.91823934545, 00:26:06.468 "min_latency_us": 1549.6533333333334, 00:26:06.468 "max_latency_us": 14199.466666666667 00:26:06.468 } 00:26:06.468 ], 00:26:06.468 "core_count": 1 00:26:06.468 } 00:26:06.468 18:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:06.468 [2024-12-06 18:37:54.446143] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:26:06.468 [2024-12-06 18:37:54.446199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251105 ] 00:26:06.468 [2024-12-06 18:37:54.529346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.468 [2024-12-06 18:37:54.557163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.468 [2024-12-06 18:37:56.838985] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:06.468 [2024-12-06 18:37:56.839023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.468 [2024-12-06 18:37:56.839031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.468 [2024-12-06 18:37:56.839038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.468 [2024-12-06 18:37:56.839044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.468 [2024-12-06 18:37:56.839049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:06.468 [2024-12-06 18:37:56.839054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.468 [2024-12-06 18:37:56.839060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:26:06.468 [2024-12-06 18:37:56.839065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.468 [2024-12-06 18:37:56.839074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:06.468 [2024-12-06 18:37:56.839096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:06.468 [2024-12-06 18:37:56.839108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec39d0 (9): Bad file descriptor 00:26:06.468 [2024-12-06 18:37:56.844125] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:06.468 Running I/O for 1 seconds... 00:26:06.468 12746.00 IOPS, 49.79 MiB/s 00:26:06.468 Latency(us) 00:26:06.468 [2024-12-06T17:38:01.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.468 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:06.468 Verification LBA range: start 0x0 length 0x4000 00:26:06.468 NVMe0n1 : 1.00 12837.00 50.14 0.00 0.00 9935.92 1549.65 14199.47 00:26:06.468 [2024-12-06T17:38:01.252Z] =================================================================================================================== 00:26:06.468 [2024-12-06T17:38:01.252Z] Total : 12837.00 50.14 0.00 0.00 9935.92 1549.65 14199.47 00:26:06.468 18:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.468 18:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:06.730 18:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:06.991 18:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.991 18:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:06.991 18:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:07.251 18:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:10.551 18:38:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:10.551 18:38:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2251105 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2251105 ']' 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2251105 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2251105 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2251105' 00:26:10.551 killing process with pid 2251105 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2251105 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2251105 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:10.551 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.810 rmmod nvme_tcp 00:26:10.810 rmmod nvme_fabrics 00:26:10.810 rmmod nvme_keyring 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2247385 ']' 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2247385 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2247385 ']' 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2247385 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.810 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2247385 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2247385' 00:26:11.070 killing process with pid 2247385 
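
The failover exercise that finishes above reduces to a short RPC sequence: publish two extra listeners on the subsystem, attach the controller through bdevperf's RPC socket with the failover policy on all three paths, drop the active path, confirm the controller survives, and assert that 'Resetting controller successful' appears the expected three times in the captured output. A condensed sketch of those steps, with the absolute workspace paths from the trace shortened to rpc.py and bdevperf.py (the sequential attach calls are folded into a loop here for brevity):

    # extra target listeners so the host has somewhere to fail over to
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # attach the same controller over ports 4420/4421/4422 with failover enabled
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # drop the active path, give the initiator time to fail over, then verify
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # I/O must complete on a surviving path

The teardown traced below (nvmftestfini) then undoes the environment: it restores the iptables rules minus the SPDK_NVMF-tagged entries, removes the cvl_0_0_ns_spdk namespace, and flushes the initiator address from cvl_0_1.
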
00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2247385 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2247385 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.070 18:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:13.704 00:26:13.704 real 0m40.349s 00:26:13.704 user 2m4.136s 00:26:13.704 sys 0m8.759s 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:13.704 ************************************ 00:26:13.704 END TEST nvmf_failover 00:26:13.704 ************************************ 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.704 ************************************ 00:26:13.704 START TEST nvmf_host_discovery 00:26:13.704 ************************************ 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:13.704 * Looking for test storage... 
00:26:13.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:26:13.704 18:38:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:13.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.704 --rc genhtml_branch_coverage=1 00:26:13.704 --rc genhtml_function_coverage=1 00:26:13.704 --rc genhtml_legend=1 00:26:13.704 --rc geninfo_all_blocks=1 00:26:13.704 --rc geninfo_unexecuted_blocks=1 00:26:13.704 00:26:13.704 ' 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:13.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.704 --rc genhtml_branch_coverage=1 00:26:13.704 --rc genhtml_function_coverage=1 00:26:13.704 --rc genhtml_legend=1 00:26:13.704 --rc geninfo_all_blocks=1 00:26:13.704 --rc geninfo_unexecuted_blocks=1 00:26:13.704 00:26:13.704 ' 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:13.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.704 --rc genhtml_branch_coverage=1 00:26:13.704 --rc genhtml_function_coverage=1 00:26:13.704 --rc genhtml_legend=1 00:26:13.704 --rc geninfo_all_blocks=1 00:26:13.704 --rc geninfo_unexecuted_blocks=1 00:26:13.704 00:26:13.704 ' 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:13.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.704 --rc genhtml_branch_coverage=1 00:26:13.704 --rc genhtml_function_coverage=1 00:26:13.704 --rc genhtml_legend=1 00:26:13.704 --rc geninfo_all_blocks=1 00:26:13.704 --rc geninfo_unexecuted_blocks=1 00:26:13.704 00:26:13.704 ' 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:13.704 18:38:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.704 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:13.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:13.705 18:38:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.903 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:21.904 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:21.904 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.904 18:38:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:21.904 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:21.904 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:21.904 
18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:21.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:26:21.904 00:26:21.904 --- 10.0.0.2 ping statistics --- 00:26:21.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.904 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:26:21.904 00:26:21.904 --- 10.0.0.1 ping statistics --- 00:26:21.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.904 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2257483 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2257483 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2257483 ']' 00:26:21.904 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.905 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.905 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.905 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.905 18:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.905 [2024-12-06 18:38:15.694912] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
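
The nvmftestinit trace above is the physical-NIC variant of the harness's network setup: after flushing stale addresses from both ports, the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, and a ping in each direction proves the wiring before the target starts. Stripped of the wrapper functions, the commands from the trace are:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

Everything target-side from here on runs inside that namespace, which is why nvmf_tgt is launched above as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2.
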
00:26:21.905 [2024-12-06 18:38:15.694982] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.905 [2024-12-06 18:38:15.792677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.905 [2024-12-06 18:38:15.842469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.905 [2024-12-06 18:38:15.842522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.905 [2024-12-06 18:38:15.842531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.905 [2024-12-06 18:38:15.842539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.905 [2024-12-06 18:38:15.842546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.905 [2024-12-06 18:38:15.843302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.905 [2024-12-06 18:38:16.566605] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.905 [2024-12-06 18:38:16.578846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.905 null0 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.905 null1 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2257811 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2257811 /tmp/host.sock 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2257811 ']' 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:21.905 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.905 18:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.905 [2024-12-06 18:38:16.682587] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:26:21.905 [2024-12-06 18:38:16.682667] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2257811 ] 00:26:22.166 [2024-12-06 18:38:16.773954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.166 [2024-12-06 18:38:16.827229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.739 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.001 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.002 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.263 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:23.263 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:23.263 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.263 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.264 [2024-12-06 18:38:17.878194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:23.264 18:38:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.264 18:38:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:23.264 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:23.525 18:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:24.097 [2024-12-06 18:38:18.575565] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:24.097 [2024-12-06 18:38:18.575585] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:24.097 [2024-12-06 18:38:18.575598] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:24.097 
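Up to this point the trace is the setup half of the discovery test: the first nvmf_tgt (the target, RPC socket /var/tmp/spdk.sock, run inside a network namespace) gets a TCP transport, a discovery listener on 10.0.0.2:8009 and null bdevs; the second nvmf_tgt (the host, RPC socket /tmp/host.sock) runs bdev_nvme_start_discovery against that listener, and the waitforcondition polls above repeat bdev_nvme_get_controllers and bdev_get_bdevs until controller nvme0 and bdev nvme0n1 appear. A minimal sketch of the same flow driven by hand, assuming SPDK's stock scripts/rpc.py client and the sockets and addresses shown in the trace (the ip netns wrapping of the target is omitted here):

TGT=/var/tmp/spdk.sock   # target-side RPC socket
HOST=/tmp/host.sock      # host-side RPC socket

# Target side: TCP transport, discovery listener, backing null bdev.
scripts/rpc.py -s $TGT nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py -s $TGT nvmf_subsystem_add_listener \
    nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py -s $TGT bdev_null_create null0 1000 512   # 1000 MiB, 512 B blocks

# Host side: attach to the discovery service; every subsystem the discovery
# log page reports is attached automatically as a controller named nvme*.
scripts/rpc.py -s $HOST bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

# Back on the target: the subsystem becomes visible to the host only once
# nvmf_subsystem_add_host allows its NQN, which is when nvme0/nvme0n1 appear.
scripts/rpc.py -s $TGT nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py -s $TGT nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
scripts/rpc.py -s $TGT nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py -s $TGT nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

As the trace shows, the test issues nvmf_subsystem_add_host last, after discovery is already running, which is why the get_subsystem_names checks return the empty string until that point.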
[2024-12-06 18:38:18.663875] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:24.097 [2024-12-06 18:38:18.723635] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:24.097 [2024-12-06 18:38:18.724599] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x658320:1 started. 00:26:24.097 [2024-12-06 18:38:18.726210] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:24.097 [2024-12-06 18:38:18.726229] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:24.097 [2024-12-06 18:38:18.734112] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x658320 was disconnected and freed. delete nvme_qpair. 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:24.358 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.621 18:38:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.621 18:38:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.621 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.883 [2024-12-06 18:38:19.566816] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6586a0:1 started. 00:26:24.883 [2024-12-06 18:38:19.576134] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6586a0 was disconnected and freed. delete nvme_qpair. 
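The notify_id bookkeeping above is how the test asserts that each hot-plug event is observed exactly once: notify_get_notifications -i <last_id> returns every event recorded after that id, jq '. | length' turns the reply into a count, and the test advances notify_id as it consumes events (0 before anything happens, 1 once nvme0n1 is attached, 2 once null1 is hot-added and surfaces as nvme0n2). A sketch of that accounting, under the same rpc.py and socket assumptions as the sketch above:

# Count the events the host-side app recorded after notify id $1.
notification_count() {
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$1" \
        | jq '. | length'
}

# Hot-add a second namespace on the target; it becomes bdev nvme0n2 on the
# host and produces exactly one new notification past id 1.
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
    nqn.2016-06.io.spdk:cnode0 null1
notification_count 1   # expected to print 1 once the AER has been processed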
00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.883 [2024-12-06 18:38:19.658849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:24.883 [2024-12-06 18:38:19.659503] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:24.883 [2024-12-06 18:38:19.659525] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:24.883 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:25.143 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.144 [2024-12-06 18:38:19.748229] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:25.144 18:38:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:25.404 [2024-12-06 18:38:20.054828] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:25.404 [2024-12-06 18:38:20.054873] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:25.404 [2024-12-06 18:38:20.054883] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:25.404 [2024-12-06 18:38:20.054888] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.348 [2024-12-06 18:38:20.922413] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:26.348 [2024-12-06 18:38:20.922431] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:26.348 [2024-12-06 18:38:20.931732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.348 [2024-12-06 18:38:20.931749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.348 [2024-12-06 18:38:20.931756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.348 [2024-12-06 18:38:20.931761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.348 [2024-12-06 18:38:20.931767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.348 [2024-12-06 18:38:20.931772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.348 [2024-12-06 18:38:20.931778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.348 [2024-12-06 18:38:20.931783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.348 [2024-12-06 18:38:20.931788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62a470 is same with the state(6) to be set 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.348 [2024-12-06 18:38:20.941746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a470 (9): Bad file descriptor 00:26:26.348 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.348 [2024-12-06 18:38:20.951780] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:26.348 [2024-12-06 18:38:20.951788] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:26.348 [2024-12-06 18:38:20.951794] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:26.348 [2024-12-06 18:38:20.951798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:26.348 [2024-12-06 18:38:20.951812] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:26.348 [2024-12-06 18:38:20.952019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-12-06 18:38:20.952031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62a470 with addr=10.0.0.2, port=4420 00:26:26.348 [2024-12-06 18:38:20.952037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62a470 is same with the state(6) to be set 00:26:26.348 [2024-12-06 18:38:20.952047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a470 (9): Bad file descriptor 00:26:26.348 [2024-12-06 18:38:20.952055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:26.348 [2024-12-06 18:38:20.952060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:26.348 [2024-12-06 18:38:20.952066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:26.348 [2024-12-06 18:38:20.952071] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:26.348 [2024-12-06 18:38:20.952075] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
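The errno = 111 (ECONNREFUSED) churn through this stretch is expected rather than a failure: the test has just issued nvmf_subsystem_remove_listener for port 4420, so the target drops that qpair and every reconnect bdev_nvme attempts against 4420 is refused; the retries stop once the next discovery log page reports the 4420 path "not found" while 4421 is "found again", after which get_subsystem_paths sees only 4421. A sketch of the step that triggers the churn and the check that concludes it, same assumptions as above:

# Drop the first data path on the target.
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_listener \
    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Once discovery has caught up, only the second path should remain.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # expect: 4421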
00:26:26.348 [2024-12-06 18:38:20.952079] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:26.348 [2024-12-06 18:38:20.961840] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:26.348 [2024-12-06 18:38:20.961848] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:26.348 [2024-12-06 18:38:20.961851] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:26.348 [2024-12-06 18:38:20.961855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:26.348 [2024-12-06 18:38:20.961865] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:26.348 [2024-12-06 18:38:20.962174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.348 [2024-12-06 18:38:20.962183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62a470 with addr=10.0.0.2, port=4420 00:26:26.348 [2024-12-06 18:38:20.962188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62a470 is same with the state(6) to be set 00:26:26.348 [2024-12-06 18:38:20.962199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a470 (9): Bad file descriptor 00:26:26.349 [2024-12-06 18:38:20.962206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:26.349 [2024-12-06 18:38:20.962211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:26.349 [2024-12-06 18:38:20.962216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:26.349 [2024-12-06 18:38:20.962221] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:26.349 [2024-12-06 18:38:20.962224] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:26.349 [2024-12-06 18:38:20.962227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:26.349 [2024-12-06 18:38:20.971895] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:26.349 [2024-12-06 18:38:20.971905] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:26.349 [2024-12-06 18:38:20.971908] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:26.349 [2024-12-06 18:38:20.971911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:26.349 [2024-12-06 18:38:20.971923] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:26.349 [2024-12-06 18:38:20.972233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-12-06 18:38:20.972243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62a470 with addr=10.0.0.2, port=4420 00:26:26.349 [2024-12-06 18:38:20.972248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62a470 is same with the state(6) to be set 00:26:26.349 [2024-12-06 18:38:20.972256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a470 (9): Bad file descriptor 00:26:26.349 [2024-12-06 18:38:20.972264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:26.349 [2024-12-06 18:38:20.972268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:26.349 [2024-12-06 18:38:20.972273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:26.349 [2024-12-06 18:38:20.972278] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:26.349 [2024-12-06 18:38:20.972281] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:26.349 [2024-12-06 18:38:20.972284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:26.349 [2024-12-06 18:38:20.981952] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:26.349 [2024-12-06 18:38:20.981968] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:26.349 [2024-12-06 18:38:20.981971] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:26.349 [2024-12-06 18:38:20.981974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:26.349 [2024-12-06 18:38:20.981985] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:26.349 [2024-12-06 18:38:20.982275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-12-06 18:38:20.982285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62a470 with addr=10.0.0.2, port=4420 00:26:26.349 [2024-12-06 18:38:20.982290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62a470 is same with the state(6) to be set 00:26:26.349 [2024-12-06 18:38:20.982298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a470 (9): Bad file descriptor 00:26:26.349 [2024-12-06 18:38:20.982305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:26.349 [2024-12-06 18:38:20.982310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:26.349 [2024-12-06 18:38:20.982315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:26.349 [2024-12-06 18:38:20.982319] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:26.349 [2024-12-06 18:38:20.982322] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:26.349 [2024-12-06 18:38:20.982325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.349 18:38:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.349 [2024-12-06 18:38:20.992014] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:26.349 [2024-12-06 18:38:20.992023] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:26.349 [2024-12-06 18:38:20.992027] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:26.349 [2024-12-06 18:38:20.992030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:26.349 [2024-12-06 18:38:20.992040] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:26.349 [2024-12-06 18:38:20.992372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-12-06 18:38:20.992381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62a470 with addr=10.0.0.2, port=4420 00:26:26.349 [2024-12-06 18:38:20.992386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62a470 is same with the state(6) to be set 00:26:26.349 [2024-12-06 18:38:20.992394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a470 (9): Bad file descriptor 00:26:26.349 [2024-12-06 18:38:20.992401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:26.349 [2024-12-06 18:38:20.992408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:26.349 [2024-12-06 18:38:20.992414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:26.349 [2024-12-06 18:38:20.992418] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:26.349 [2024-12-06 18:38:20.992421] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:26.349 [2024-12-06 18:38:20.992424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:26.349 [2024-12-06 18:38:21.002069] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:26.349 [2024-12-06 18:38:21.002079] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:26.349 [2024-12-06 18:38:21.002083] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:26.349 [2024-12-06 18:38:21.002086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:26.349 [2024-12-06 18:38:21.002097] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:26.349 [2024-12-06 18:38:21.002435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.349 [2024-12-06 18:38:21.002445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62a470 with addr=10.0.0.2, port=4420 00:26:26.349 [2024-12-06 18:38:21.002450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62a470 is same with the state(6) to be set 00:26:26.349 [2024-12-06 18:38:21.002458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a470 (9): Bad file descriptor 00:26:26.349 [2024-12-06 18:38:21.002465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:26.349 [2024-12-06 18:38:21.002470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:26.349 [2024-12-06 18:38:21.002475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:26.349 [2024-12-06 18:38:21.002479] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:26.349 [2024-12-06 18:38:21.002483] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:26.349 [2024-12-06 18:38:21.002486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:26.349 [2024-12-06 18:38:21.010074] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:26.349 [2024-12-06 18:38:21.010088] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:26.349 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:26.350 18:38:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.350 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:26.612 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:26.613 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:26.613 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.613 18:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.996 [2024-12-06 18:38:22.375811] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:27.996 [2024-12-06 18:38:22.375826] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:27.996 [2024-12-06 18:38:22.375835] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:27.997 [2024-12-06 18:38:22.463076] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:27.997 [2024-12-06 18:38:22.732246] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:27.997 [2024-12-06 18:38:22.732897] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x626480:1 started. 
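The attach flow being traced here (discovery ctrlr attached -> discovery log page -> new subsystem nvme0 on 10.0.0.2:4421 -> ctrlr created, qpair connecting) was started by the bdev_nvme_start_discovery call at host/discovery.sh@141 above. rpc_cmd funnels these requests through SPDK's scripts/rpc.py, so an equivalent request can be issued directly; a minimal sketch using the socket path and flags from this run, assuming the host app is still serving /tmp/host.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Start a discovery service against the target's discovery subsystem on
  # port 8009; -w waits for the initial attach to finish before returning.
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # List the bdevs the discovery service attached, as get_bdev_list does.
  $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

Reissuing the call while a discovery service named "nvme" already exists fails with -17 "File exists", which is exactly the negative case the NOT wrapper exercises next.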
00:26:27.997 [2024-12-06 18:38:22.734262] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:27.997 [2024-12-06 18:38:22.734284] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.997 [2024-12-06 18:38:22.744146] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x626480 was disconnected and freed. delete nvme_qpair. 
00:26:27.997 request: 00:26:27.997 { 00:26:27.997 "name": "nvme", 00:26:27.997 "trtype": "tcp", 00:26:27.997 "traddr": "10.0.0.2", 00:26:27.997 "adrfam": "ipv4", 00:26:27.997 "trsvcid": "8009", 00:26:27.997 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:27.997 "wait_for_attach": true, 00:26:27.997 "method": "bdev_nvme_start_discovery", 00:26:27.997 "req_id": 1 00:26:27.997 } 00:26:27.997 Got JSON-RPC error response 00:26:27.997 response: 00:26:27.997 { 00:26:27.997 "code": -17, 00:26:27.997 "message": "File exists" 00:26:27.997 } 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.997 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 request: 00:26:28.258 { 00:26:28.258 "name": "nvme_second", 00:26:28.258 "trtype": "tcp", 00:26:28.258 "traddr": "10.0.0.2", 00:26:28.258 "adrfam": "ipv4", 00:26:28.258 "trsvcid": "8009", 00:26:28.258 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:28.258 "wait_for_attach": true, 00:26:28.258 "method": "bdev_nvme_start_discovery", 00:26:28.258 "req_id": 1 00:26:28.258 } 00:26:28.258 Got JSON-RPC error response 00:26:28.258 response: 00:26:28.258 { 00:26:28.258 "code": -17, 00:26:28.258 "message": "File exists" 00:26:28.258 } 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.258 18:38:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:28.258 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:28.259 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:28.259 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:28.259 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.259 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:28.259 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.259 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:28.259 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.259 18:38:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.217 [2024-12-06 18:38:23.993696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.217 [2024-12-06 18:38:23.993720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e380 with addr=10.0.0.2, port=8010 00:26:29.217 [2024-12-06 18:38:23.993730] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:29.217 [2024-12-06 18:38:23.993735] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:29.217 [2024-12-06 18:38:23.993740] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:30.600 [2024-12-06 18:38:24.995885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.600 [2024-12-06 18:38:24.995906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x65e380 with addr=10.0.0.2, port=8010 00:26:30.600 [2024-12-06 18:38:24.995915] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:30.600 [2024-12-06 18:38:24.995920] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:30.600 [2024-12-06 18:38:24.995924] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:31.539 [2024-12-06 18:38:25.998043] 
bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:31.539 request: 00:26:31.539 { 00:26:31.539 "name": "nvme_second", 00:26:31.539 "trtype": "tcp", 00:26:31.539 "traddr": "10.0.0.2", 00:26:31.539 "adrfam": "ipv4", 00:26:31.539 "trsvcid": "8010", 00:26:31.539 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:31.539 "wait_for_attach": false, 00:26:31.539 "attach_timeout_ms": 3000, 00:26:31.539 "method": "bdev_nvme_start_discovery", 00:26:31.539 "req_id": 1 00:26:31.539 } 00:26:31.539 Got JSON-RPC error response 00:26:31.539 response: 00:26:31.539 { 00:26:31.539 "code": -110, 00:26:31.539 "message": "Connection timed out" 00:26:31.539 } 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2257811 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.539 rmmod nvme_tcp 00:26:31.539 rmmod nvme_fabrics 00:26:31.539 rmmod nvme_keyring 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:31.539 18:38:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2257483 ']' 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2257483 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2257483 ']' 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2257483 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2257483 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2257483' 00:26:31.539 killing process with pid 2257483 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2257483 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2257483 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.539 18:38:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.084 00:26:34.084 real 0m20.486s 00:26:34.084 user 0m23.896s 00:26:34.084 sys 0m7.274s 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.084 ************************************ 00:26:34.084 END TEST nvmf_host_discovery 00:26:34.084 ************************************ 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.084 ************************************ 00:26:34.084 START TEST nvmf_host_multipath_status 00:26:34.084 ************************************ 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:34.084 * Looking for test storage... 00:26:34.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.084 --rc genhtml_branch_coverage=1 00:26:34.084 --rc genhtml_function_coverage=1 00:26:34.084 --rc genhtml_legend=1 00:26:34.084 --rc geninfo_all_blocks=1 00:26:34.084 --rc geninfo_unexecuted_blocks=1 00:26:34.084 00:26:34.084 ' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.084 --rc genhtml_branch_coverage=1 00:26:34.084 --rc genhtml_function_coverage=1 00:26:34.084 --rc genhtml_legend=1 00:26:34.084 --rc geninfo_all_blocks=1 00:26:34.084 --rc geninfo_unexecuted_blocks=1 00:26:34.084 00:26:34.084 ' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.084 --rc genhtml_branch_coverage=1 00:26:34.084 --rc genhtml_function_coverage=1 00:26:34.084 --rc genhtml_legend=1 00:26:34.084 --rc geninfo_all_blocks=1 00:26:34.084 --rc geninfo_unexecuted_blocks=1 00:26:34.084 00:26:34.084 ' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.084 --rc genhtml_branch_coverage=1 00:26:34.084 --rc genhtml_function_coverage=1 00:26:34.084 --rc genhtml_legend=1 00:26:34.084 --rc geninfo_all_blocks=1 00:26:34.084 --rc geninfo_unexecuted_blocks=1 00:26:34.084 00:26:34.084 ' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
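The lt/cmp_versions trace above gates the coverage flags on the installed lcov version: both version strings are split on ".", "-", and ":" and compared field by field, numerically, with missing fields treated as 0. A condensed, self-contained sketch of that comparison (ver_lt is a hypothetical name; the real helper lives in scripts/common.sh and handles all comparison operators):

  ver_lt() {    # usage: ver_lt 1.15 2  -> exit 0 iff $1 sorts before $2
      local IFS=.-: i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo "lcov older than 2: keep the --rc lcov_*_coverage=1 spellings"

Here lcov 1.15 sorts before 2, so the run keeps the old-style "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" options seen in LCOV_OPTS above.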
00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.084 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:34.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:34.085 18:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.228 18:38:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:42.228 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:42.228 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:42.228 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:26:42.228 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:42.228 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:42.229 18:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:42.229 18:38:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:42.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:42.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:26:42.229 00:26:42.229 --- 10.0.0.2 ping statistics --- 00:26:42.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.229 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:42.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:42.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:26:42.229 00:26:42.229 --- 10.0.0.1 ping statistics --- 00:26:42.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:42.229 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2263990 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2263990 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2263990 ']' 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:42.229 18:38:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:42.229 18:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.229 [2024-12-06 18:38:36.273801] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:26:42.229 [2024-12-06 18:38:36.273869] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.229 [2024-12-06 18:38:36.373122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:42.229 [2024-12-06 18:38:36.424452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.229 [2024-12-06 18:38:36.424508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.229 [2024-12-06 18:38:36.424517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.229 [2024-12-06 18:38:36.424524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.229 [2024-12-06 18:38:36.424530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:42.229 [2024-12-06 18:38:36.426189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.229 [2024-12-06 18:38:36.426193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.492 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.492 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:42.492 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:42.492 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:42.492 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.492 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.492 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2263990 00:26:42.492 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:42.754 [2024-12-06 18:38:37.298534] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.754 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:42.754 Malloc0 00:26:43.016 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:26:43.016 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:43.279 18:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.540 [2024-12-06 18:38:38.131320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.540 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:43.802 [2024-12-06 18:38:38.331835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2264350 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2264350 /var/tmp/bdevperf.sock 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2264350 ']' 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:43.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
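The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) builds the two-endpoint TCP topology on a single machine: the target-side port cvl_0_0 is moved into a private network namespace while the initiator-side port cvl_0_1 stays in the root namespace, so the two E810 ports can talk to each other as if they were separate hosts. A minimal sketch of that setup, using the interface names and addresses from this run (the `ipts` call in the trace is common.sh's tagged wrapper around iptables; plain iptables is shown here):

  # Move the target NIC into its own namespace; the initiator NIC stays put.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) in through the initiator-side interface:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions before starting the target:
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1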
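With networking in place, nvmfappstart launches nvmf_tgt inside the target namespace and the test provisions it over JSON-RPC. The calls traced above reduce to the following sequence (a sketch; $rpc abbreviates the scripts/rpc.py path used throughout this run, and the flag comments are interpretive):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # The target runs inside the namespace so it binds the namespaced NIC:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

  $rpc nvmf_create_transport -t tcp -o -u 8192      # transport options as used by this test
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on one address give the host two distinct paths to the same namespace:
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The -r flag on nvmf_create_subsystem enables ANA reporting, which is what the rest of this test exercises.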
00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.802 18:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:44.746 18:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.746 18:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:44.746 18:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:44.746 18:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:45.320 Nvme0n1 00:26:45.320 18:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:45.582 Nvme0n1 00:26:45.582 18:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:45.582 18:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:48.131 18:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:48.131 18:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:48.131 18:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:48.131 18:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:49.075 18:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:49.075 18:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:49.075 18:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.075 18:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:49.075 18:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.075 18:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:49.075 18:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.075 18:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:49.337 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:49.337 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:49.337 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.337 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:49.598 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.598 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:49.598 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.598 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:49.859 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.859 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:49.859 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.859 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:49.859 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.859 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:49.859 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.859 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:50.121 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.121 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:50.121 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:26:50.384 18:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:50.384 18:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:51.325 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:51.325 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:51.325 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.325 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:51.586 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:51.586 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:51.586 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.586 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:51.847 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.847 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:51.847 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.847 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:52.109 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.109 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:52.109 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:52.109 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.109 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.109 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:52.109 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
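On the host side the test drives bdevperf (started earlier with -z and its own RPC socket, /var/tmp/bdevperf.sock) and attaches the same subsystem once per listener, so bdev_nvme aggregates the two connections into the single multipath bdev Nvme0n1. A sketch reconstructed from the @52/@55/@56 calls above; the flag comments are interpretive:

  brpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

  $brpc bdev_nvme_set_options -r -1     # bdev-level retry count; -1 = retry forever
  # Same bdev name and NQN both times: the second attach adds a path rather than a
  # new controller (-x multipath), with no ctrlr-loss timeout (-l -1) and a 10 s
  # reconnect delay (-o 10).
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10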
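Every check_status pass in this log is the same probe six times over: dump bdevperf's I/O paths and compare one field of one listener's path against an expected value. The port_status helper traced at host/multipath_status.sh@64 amounts to (reconstructed from the trace; $brpc as above):

  port_status() {  # usage: port_status <trsvcid> <current|connected|accessible> <true|false>
      local port=$1 field=$2 expected=$3 actual
      actual=$($brpc bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ $actual == "$expected" ]]
  }

With the default active_passive policy only one path is "current" at a time, which is why the optimized/optimized pass above expected current=true on 4420 but current=false on 4421 while both paths stayed connected and accessible.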
00:26:52.109 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:52.373 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.373 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:52.373 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.373 18:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:52.633 18:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.633 18:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:52.633 18:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:52.633 18:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:52.895 18:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:53.835 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:53.835 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.835 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.835 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:54.094 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.094 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:54.094 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.094 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:54.353 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:54.353 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:54.353 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:26:54.353 18:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.353 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.353 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:54.353 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.353 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:54.612 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.612 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:54.612 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:54.612 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.871 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.871 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:54.871 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.871 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:55.131 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.131 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:55.131 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:55.131 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:55.390 18:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:56.330 18:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:56.330 18:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:56.330 18:38:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.330 18:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:56.590 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.590 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:56.590 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.590 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:56.590 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.591 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:56.591 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.591 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:56.851 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.851 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:56.851 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.851 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:57.112 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.112 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:57.112 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.112 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:57.112 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.112 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:57.112 18:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.372 18:38:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:57.372 18:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.372 18:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:57.372 18:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:57.631 18:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:57.890 18:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:58.830 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:58.830 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:58.830 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.830 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:59.090 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.090 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:59.090 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.090 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:59.090 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.090 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:59.090 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.090 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:59.349 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.349 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:59.349 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.349 18:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:59.608 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.608 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:59.608 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.608 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:59.608 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.608 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:59.608 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.608 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:59.868 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.868 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:59.868 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:00.127 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:00.127 18:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:01.510 18:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:01.510 18:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:01.510 18:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.510 18:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:01.510 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:01.510 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:01.510 18:38:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.510 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:01.510 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.510 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:01.510 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.510 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:01.771 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.771 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:01.771 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.771 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:02.033 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.033 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:02.033 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.033 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:02.033 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.033 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:02.033 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.033 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:02.295 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.295 18:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:02.557 18:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:27:02.557 18:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:02.558 18:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:02.819 18:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:03.762 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:03.762 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:03.762 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.762 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:04.022 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.022 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:04.022 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.022 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:04.297 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.297 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:04.297 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.297 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:04.297 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.297 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:04.297 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.297 18:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:04.575 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.575 18:38:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:04.575 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:04.575 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.837 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.837 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:04.837 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.837 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:04.837 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.837 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:04.837 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:05.097 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:05.357 18:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:06.299 18:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:06.299 18:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:06.299 18:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.299 18:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:06.560 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.560 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:06.560 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.560 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:06.560 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.560 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:06.560 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.560 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:06.821 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.821 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:06.821 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:06.821 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.082 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.082 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:07.082 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.082 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:07.082 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.082 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:07.082 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.082 18:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:07.342 18:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.342 18:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:07.342 18:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:07.603 18:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:07.603 18:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
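The ANA transitions themselves are one RPC per listener against the target side (not bdevperf), as in the non_optimized/non_optimized step just traced. The set_ANA_state helper reduces to (a sketch; $rpc as defined earlier):

  set_ANA_state() {  # usage: set_ANA_state <state for port 4420> <state for port 4421>
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  set_ANA_state non_optimized non_optimized   # the transition traced above

Since the policy was switched at @116 (bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active), every path in the best available ANA state now carries I/O, so the check that follows expects current=true on both ports even though neither listener is optimized.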
00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.988 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:09.250 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.250 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:09.250 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:09.250 18:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.511 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.511 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:09.512 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:09.512 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.512 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.512 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:09.512 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.512 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:09.773 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.773 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:09.773 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:10.035 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:10.035 18:39:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:11.418 18:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:11.418 18:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:11.418 18:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.418 18:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:11.418 18:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.418 18:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:11.418 18:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.418 18:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:11.418 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.418 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:11.418 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.418 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:11.678 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:11.678 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:11.678 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:11.678 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.938 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.938 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:11.938 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.938 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2264350 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2264350 ']' 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2264350 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.198 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2264350 00:27:12.473 18:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:12.473 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:12.473 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2264350' 00:27:12.473 killing process with pid 2264350 00:27:12.473 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2264350 00:27:12.473 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2264350 00:27:12.473 { 00:27:12.473 "results": [ 00:27:12.473 { 00:27:12.473 "job": "Nvme0n1", 
00:27:12.473 "core_mask": "0x4", 00:27:12.473 "workload": "verify", 00:27:12.473 "status": "terminated", 00:27:12.473 "verify_range": { 00:27:12.473 "start": 0, 00:27:12.473 "length": 16384 00:27:12.473 }, 00:27:12.473 "queue_depth": 128, 00:27:12.473 "io_size": 4096, 00:27:12.473 "runtime": 26.547186, 00:27:12.473 "iops": 11933.807221601566, 00:27:12.473 "mibps": 46.61643445938112, 00:27:12.473 "io_failed": 0, 00:27:12.473 "io_timeout": 0, 00:27:12.473 "avg_latency_us": 10706.35501525104, 00:27:12.473 "min_latency_us": 682.6666666666666, 00:27:12.473 "max_latency_us": 3019898.88 00:27:12.473 } 00:27:12.473 ], 00:27:12.474 "core_count": 1 00:27:12.474 } 00:27:12.474 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2264350 00:27:12.474 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:12.474 [2024-12-06 18:38:38.411792] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:27:12.474 [2024-12-06 18:38:38.411863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264350 ] 00:27:12.474 [2024-12-06 18:38:38.506994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.474 [2024-12-06 18:38:38.557586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.474 Running I/O for 90 seconds... 00:27:12.474 10629.00 IOPS, 41.52 MiB/s [2024-12-06T17:39:07.258Z] 10981.00 IOPS, 42.89 MiB/s [2024-12-06T17:39:07.258Z] 11116.00 IOPS, 43.42 MiB/s [2024-12-06T17:39:07.258Z] 11431.00 IOPS, 44.65 MiB/s [2024-12-06T17:39:07.258Z] 11728.60 IOPS, 45.81 MiB/s [2024-12-06T17:39:07.258Z] 11886.83 IOPS, 46.43 MiB/s [2024-12-06T17:39:07.258Z] 12000.86 IOPS, 46.88 MiB/s [2024-12-06T17:39:07.258Z] 12117.88 IOPS, 47.34 MiB/s [2024-12-06T17:39:07.258Z] 12197.00 IOPS, 47.64 MiB/s [2024-12-06T17:39:07.258Z] 12251.10 IOPS, 47.86 MiB/s [2024-12-06T17:39:07.258Z] 12313.09 IOPS, 48.10 MiB/s [2024-12-06T17:39:07.258Z] [2024-12-06 18:38:52.223272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.474 [2024-12-06 18:38:52.223307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.223858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.223870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.474 [2024-12-06 18:38:52.224714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.474 [2024-12-06 18:38:52.224732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.474 [2024-12-06 18:38:52.224750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.474 [2024-12-06 18:38:52.224767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.474 [2024-12-06 18:38:52.224784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.474 [2024-12-06 18:38:52.224802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.474 [2024-12-06 18:38:52.224876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:12.474 [2024-12-06 18:38:52.224890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.474 [2024-12-06 18:38:52.224896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.224910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.224915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.224928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.224933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.224946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.224952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 
cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.224965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.224971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.224983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.224988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:108 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225812] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.475 [2024-12-06 18:38:52.225863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:12.475 [2024-12-06 18:38:52.225878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.225883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.225898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.225903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.225918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.225924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.225939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.225944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.225958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.225964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.225979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.225985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.225999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:38:52.226519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:38:52.226525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:12.476 12138.75 IOPS, 47.42 MiB/s [2024-12-06T17:39:07.260Z] 11205.00 IOPS, 43.77 MiB/s [2024-12-06T17:39:07.260Z] 10404.64 IOPS, 40.64 MiB/s [2024-12-06T17:39:07.260Z] 9887.53 IOPS, 38.62 MiB/s 
[2024-12-06T17:39:07.260Z] 10069.50 IOPS, 39.33 MiB/s [2024-12-06T17:39:07.260Z] 10266.94 IOPS, 40.11 MiB/s [2024-12-06T17:39:07.260Z] 10645.89 IOPS, 41.59 MiB/s [2024-12-06T17:39:07.260Z] 10990.74 IOPS, 42.93 MiB/s [2024-12-06T17:39:07.260Z] 11154.35 IOPS, 43.57 MiB/s [2024-12-06T17:39:07.260Z] 11234.95 IOPS, 43.89 MiB/s [2024-12-06T17:39:07.260Z] 11321.09 IOPS, 44.22 MiB/s [2024-12-06T17:39:07.260Z] 11556.26 IOPS, 45.14 MiB/s [2024-12-06T17:39:07.260Z] 11778.00 IOPS, 46.01 MiB/s [2024-12-06T17:39:07.260Z] [2024-12-06 18:39:04.765961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:39:04.766000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:39:04.766018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:39:04.766024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:39:04.766035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:39:04.766041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:39:04.766052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:39:04.766057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:39:04.766067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:39:04.766072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:39:04.766083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:39:04.766088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:39:04.766098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:39:04.766104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:39:04.766119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.476 [2024-12-06 18:39:04.766125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:12.476 [2024-12-06 18:39:04.766135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 
00:27:12.476 - 00:27:12.481 [2024-12-06 18:39:04.766140 - 18:39:04.783702] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: several hundred near-identical notice pairs for READ/WRITE commands on qid:1 (sqid:1, nsid:1, lba:74568-75784, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK TRANSPORT), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
dnr:0 00:27:12.481 [2024-12-06 18:39:04.783716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.783721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.783732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.783737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.783748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.783753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.783763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.783769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.783779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.783785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.783796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.783801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.784388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.784404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.784501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.784511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.784517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.785401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.481 [2024-12-06 18:39:04.785415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.785427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.785433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.785443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.785449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:12.481 [2024-12-06 18:39:04.785459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.481 [2024-12-06 18:39:04.785467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.785485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.785501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.785518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.785534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.482 [2024-12-06 18:39:04.785550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.785566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.785581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.785598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.785613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.785629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.785650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.785665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.785690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.785706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.785716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.785722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.786298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.786317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.786333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.786350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.786366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.786381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.786392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.786398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:27:12.482 [2024-12-06 18:39:04.787454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.482 [2024-12-06 18:39:04.787589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:12.482 [2024-12-06 18:39:04.787651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.482 [2024-12-06 18:39:04.787657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.787723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.787739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.787787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.787901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.787917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:12.483 [2024-12-06 18:39:04.787932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.787943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.787948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.483 [2024-12-06 18:39:04.789461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:12.483 [2024-12-06 18:39:04.789471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.483 [2024-12-06 18:39:04.789477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:27:12.483 [2024-12-06 18:39:04.789488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.789493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.789503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.789509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.789519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.789525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.789536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.789541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.789552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.789559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.789570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.789575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.791061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.791075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.791088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.791094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.791104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.484 [2024-12-06 18:39:04.791110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:12.484 [2024-12-06 18:39:04.791121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.484 [2024-12-06 18:39:04.791126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:12.484 [2024-12-06 18:39:04.791-04.802] nvme_qpair.c: [several hundred near-identical NOTICE records elided: alternating 243:nvme_io_qpair_print_command entries (READ/WRITE, sqid:1, nsid:1, lba 74792-77264, len:8, SGL DATA BLOCK OFFSET / TRANSPORT DATA BLOCK) and 474:spdk_nvme_print_completion entries, each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0014-007f and wrapping to 0062]
00:27:12.488 [2024-12-06 18:39:04.802323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61
nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.488 [2024-12-06 18:39:04.802330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.488 [2024-12-06 18:39:04.802349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.488 [2024-12-06 18:39:04.802369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.488 [2024-12-06 18:39:04.802388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.488 [2024-12-06 18:39:04.802408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.488 [2024-12-06 18:39:04.802428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.488 [2024-12-06 18:39:04.802447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.488 [2024-12-06 18:39:04.802468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.488 [2024-12-06 18:39:04.802489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:12.488 [2024-12-06 18:39:04.802502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.802509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.802529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.802550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.802570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.802591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.802611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.802631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.802656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.802677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.802697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.802710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.802717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:27:12.489 [2024-12-06 18:39:04.802730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.802737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.804801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:12.489 [2024-12-06 18:39:04.804817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.804845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.804851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.805674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.805686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.805697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.805703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.805714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.805720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.805730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.489 [2024-12-06 18:39:04.805736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.805747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.805754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:12.489 [2024-12-06 18:39:04.805764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.489 [2024-12-06 18:39:04.805773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.805857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.805874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805969] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.805987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.805993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.806010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.806028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.806061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.806078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.806094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:27:12.490 [2024-12-06 18:39:04.806138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.806210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.806275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.806308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.806336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.806342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.808536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.808555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.808574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.808591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.808607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.808624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.808645] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.490 [2024-12-06 18:39:04.808662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.808679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.808695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.808712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:12.490 [2024-12-06 18:39:04.808723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.490 [2024-12-06 18:39:04.808729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.808780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.808796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:12.491 [2024-12-06 18:39:04.808813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.808862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.808930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.808964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 
nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.808994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.808999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.809016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.809033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.809050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.809066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.809083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.809098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.491 [2024-12-06 18:39:04.809116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:12.491 [2024-12-06 18:39:04.809132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.491 [2024-12-06 18:39:04.809143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:12.491 [2024-12-06 18:39:04.809148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:12.491 [... 2024-12-06 18:39:04.809-04.818: several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs condensed here. Each queued READ or WRITE (qid:1, nsid:1, len:8, lba range ~75904-78600) was reprinted and completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 while the path reported the ANA inaccessible state; only the first and last pairs are kept. ...]
00:27:12.495 [2024-12-06 18:39:04.818447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.495 [2024-12-06 18:39:04.818453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:12.495 [2024-12-06 18:39:04.818447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:12.495 [2024-12-06 18:39:04.818453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:12.495 11866.04 IOPS, 46.35 MiB/s
[2024-12-06T17:39:07.279Z] 11913.19 IOPS, 46.54 MiB/s
[2024-12-06T17:39:07.279Z] Received shutdown signal, test time was about 26.547797 seconds
00:27:12.495
00:27:12.495 Latency(us)
00:27:12.495 [2024-12-06T17:39:07.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.495 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:12.495 Verification LBA range: start 0x0 length 0x4000
00:27:12.495 Nvme0n1 : 26.55 11933.81 46.62 0.00 0.00 10706.36 682.67 3019898.88
00:27:12.495 [2024-12-06T17:39:07.279Z] ===================================================================================================================
00:27:12.495 [2024-12-06T17:39:07.279Z] Total : 11933.81 46.62 0.00 0.00 10706.36 682.67 3019898.88
00:27:12.495 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:12.755 rmmod nvme_tcp
00:27:12.755 rmmod nvme_fabrics
00:27:12.755 rmmod nvme_keyring
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2263990 ']'
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2263990
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2263990 ']'
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2263990
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2263990
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2263990'
00:27:12.755 killing process with pid 2263990
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2263990
00:27:12.755 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2263990
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:13.015 18:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:14.928 18:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:14.928
00:27:14.928 real 0m41.169s
00:27:14.928 user 1m46.217s
00:27:14.928 sys 0m11.446s
00:27:14.928 18:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:14.928 18:39:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:14.928 ************************************
00:27:14.928 END TEST nvmf_host_multipath_status
00:27:14.928 ************************************
00:27:14.928 18:39:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:14.928 18:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:14.928 18:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:14.928 18:39:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:14.928 ************************************
00:27:14.928 START TEST nvmf_discovery_remove_ifc
00:27:14.928 ************************************ 00:27:14.928 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:15.190 * Looking for test storage... 00:27:15.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.190 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:15.190 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:27:15.190 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:15.190 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:15.190 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:15.190 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:15.190 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:15.190 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:15.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.191 --rc genhtml_branch_coverage=1 00:27:15.191 --rc genhtml_function_coverage=1 00:27:15.191 --rc genhtml_legend=1 00:27:15.191 --rc geninfo_all_blocks=1 00:27:15.191 --rc geninfo_unexecuted_blocks=1 00:27:15.191 00:27:15.191 ' 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:15.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.191 --rc genhtml_branch_coverage=1 00:27:15.191 --rc genhtml_function_coverage=1 00:27:15.191 --rc genhtml_legend=1 00:27:15.191 --rc geninfo_all_blocks=1 00:27:15.191 --rc geninfo_unexecuted_blocks=1 00:27:15.191 00:27:15.191 ' 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:15.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.191 --rc genhtml_branch_coverage=1 00:27:15.191 --rc genhtml_function_coverage=1 00:27:15.191 --rc genhtml_legend=1 00:27:15.191 --rc geninfo_all_blocks=1 00:27:15.191 --rc geninfo_unexecuted_blocks=1 00:27:15.191 00:27:15.191 ' 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:15.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:15.191 --rc genhtml_branch_coverage=1 00:27:15.191 --rc genhtml_function_coverage=1 00:27:15.191 --rc genhtml_legend=1 00:27:15.191 --rc geninfo_all_blocks=1 00:27:15.191 --rc geninfo_unexecuted_blocks=1 00:27:15.191 00:27:15.191 ' 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.191 
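The lcov version probe traced above boils down to a field-by-field numeric comparison. A sketch of just the lt path exercised here (the real cmp_versions in scripts/common.sh also handles the >, = and mixed-separator cases the same way):

    lt() {
        # Split both versions on '.', '-' and ':' and compare numerically;
        # missing fields count as 0, and equal versions are not strictly less.
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov predates 2.x"

With the inputs from this run, ver1=(1 15) and ver2=(2), so the first field decides: 1 < 2, lt succeeds, and the pre-2.x LCOV_OPTS are exported as shown above.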
18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.191 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:15.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:15.192 18:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:23.338 18:39:17 
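Before any traffic flows, nvmf/common.sh and the test script pin down the ports, NQNs and host identity seen in the trace. A condensed sketch using the values from this run (the numeric-test guard at the end is illustrative; it is the kind of check that produced the benign 'line 33: [: : integer expression expected' message above when a flag was unset):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=${NVME_HOSTNQN##*uuid:}")
    discovery_port=8009
    host_nqn=nqn.2021-12.io.spdk:test
    host_sock=/tmp/host.sock
    # Defaulting the expansion keeps [ ... -eq 1 ] from choking on an
    # empty string (the flag name here is illustrative, not from the log):
    if [ "${SPDK_TEST_SOME_FEATURE:-0}" -eq 1 ]; then
        NVMF_APP+=(-e 0xFFFF)   # example arg; the trace appends -e 0xFFFF this way
    fi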
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:23.338 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.338 18:39:17 
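The probe above walks a table of known NIC device IDs and maps each matching PCI function to its kernel net device through sysfs; the 0x159b matches are the two ports of the E810 card. A sketch of the idea (the real common.sh consults a prebuilt pci_bus_cache rather than calling lspci, which is used here only for brevity):

    intel=0x8086
    e810=(0x1592 0x159b)
    pci_devs=()
    for id in "${e810[@]}"; do
        # -D keeps the PCI domain in the slot, -n prints numeric IDs,
        # -d filters by vendor:device (lspci wants them without the 0x).
        pci_devs+=($(lspci -Dn -d "${intel#0x}:${id#0x}" | awk '{print $1}'))
    done
    for pci in "${pci_devs[@]}"; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${path##*/}"
        done
    done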
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:23.338 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:23.338 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:23.338 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.338 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:23.339 
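Everything from nvmf_tcp_init onward splits one physical link into a target side and an initiator side: the first E810 port moves into a private network namespace as the target NIC, the second stays in the root namespace as the initiator, and an iptables rule admits the NVMe/TCP port. Consolidated into a sketch with the interface names from this run (the ping exchange just below then verifies both directions):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC goes private
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT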
18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:23.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:27:23.339 00:27:23.339 --- 10.0.0.2 ping statistics --- 00:27:23.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.339 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:27:23.339 00:27:23.339 --- 10.0.0.1 ping statistics --- 00:27:23.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.339 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2274808 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2274808 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2274808 ']' 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:23.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.339 18:39:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.339 [2024-12-06 18:39:17.491373] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:27:23.339 [2024-12-06 18:39:17.491442] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.339 [2024-12-06 18:39:17.590095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.339 [2024-12-06 18:39:17.639989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.339 [2024-12-06 18:39:17.640041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.339 [2024-12-06 18:39:17.640049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.339 [2024-12-06 18:39:17.640056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.339 [2024-12-06 18:39:17.640062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.339 [2024-12-06 18:39:17.640827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.600 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.600 [2024-12-06 18:39:18.360121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.600 [2024-12-06 18:39:18.368356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:23.600 null0 00:27:23.861 [2024-12-06 18:39:18.400322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2275083 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2275083 /tmp/host.sock 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2275083 ']' 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:23.861 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.861 18:39:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.861 [2024-12-06 18:39:18.477867] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:27:23.861 [2024-12-06 18:39:18.477933] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2275083 ] 00:27:23.861 [2024-12-06 18:39:18.571145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.861 [2024-12-06 18:39:18.623767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.804 18:39:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.746 [2024-12-06 18:39:20.460588] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:25.746 [2024-12-06 18:39:20.460616] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:25.746 [2024-12-06 18:39:20.460629] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:26.007 [2024-12-06 18:39:20.589048] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:26.007 [2024-12-06 18:39:20.688808] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:26.007 [2024-12-06 18:39:20.689783] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1cc2ed0:1 started. 00:27:26.007 [2024-12-06 18:39:20.691341] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:26.007 [2024-12-06 18:39:20.691385] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:26.007 [2024-12-06 18:39:20.691409] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:26.007 [2024-12-06 18:39:20.691423] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:26.007 [2024-12-06 18:39:20.691443] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.007 [2024-12-06 18:39:20.698033] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1cc2ed0 was disconnected and freed. delete nvme_qpair. 
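The qpair churn above comes from a two-process arrangement: the nvmf_tgt started earlier inside the namespace is the target, while a second nvmf_tgt driven over /tmp/host.sock plays the host and attaches through the discovery service. A sketch of that host-side sequence with the exact flags from the trace (paths shortened relative to the log):

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    rpc_py="./scripts/rpc.py -s /tmp/host.sock"
    $rpc_py bdev_nvme_set_options -e 1
    $rpc_py framework_start_init
    $rpc_py bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

--wait-for-attach makes the RPC block until the discovered subsystem (cnode0 here) is connected and its namespace surfaces as bdev nvme0n1, which is what the bdev checks below look for.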
00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:26.007 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:26.268 18:39:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.208 18:39:21 
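The repeated rpc_cmd | jq | sort | xargs runs in this stretch are two small helpers from discovery_remove_ifc.sh, sketched here under the $rpc_py alias from the previous sketch (the real script drives rpc_cmd against /tmp/host.sock):

    get_bdev_list() {
        $rpc_py bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected
        # list; wait_for_bdev '' waits for the last bdev to disappear.
        while [[ "$(get_bdev_list)" != "$*" ]]; do
            sleep 1
        done
    }

After the target-side address is deleted and the interface downed above, the test sits in wait_for_bdev '' until nvme0n1 is gone, which is exactly the [[ nvme0n1 != '' ]] / sleep 1 cycle repeated below.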
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.208 18:39:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.592 18:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.592 18:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.592 18:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.592 18:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.592 18:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.592 18:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.592 18:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.592 18:39:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.592 18:39:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:28.592 18:39:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.534 18:39:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:30.479 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.479 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.479 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.479 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.480 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.480 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.480 18:39:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.480 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.480 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:30.480 18:39:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.421 [2024-12-06 18:39:26.132011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:31.421 [2024-12-06 18:39:26.132044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.421 [2024-12-06 18:39:26.132053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.421 [2024-12-06 18:39:26.132060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.421 [2024-12-06 18:39:26.132066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.421 [2024-12-06 18:39:26.132072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.421 [2024-12-06 18:39:26.132077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.421 [2024-12-06 18:39:26.132082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.421 [2024-12-06 18:39:26.132088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.421 [2024-12-06 18:39:26.132094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.421 [2024-12-06 18:39:26.132098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.421 [2024-12-06 18:39:26.132104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f6d0 is same with the state(6) to be set 00:27:31.421 18:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.421 [2024-12-06 18:39:26.142032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f6d0 (9): Bad file descriptor 00:27:31.421 18:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.421 18:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.421 18:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.421 18:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.421 18:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.421 18:39:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.421 [2024-12-06 18:39:26.152066] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:31.421 [2024-12-06 18:39:26.152079] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:31.421 [2024-12-06 18:39:26.152084] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:31.421 [2024-12-06 18:39:26.152089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:31.421 [2024-12-06 18:39:26.152106] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:32.802 [2024-12-06 18:39:27.196764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:32.802 [2024-12-06 18:39:27.196863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9f6d0 with addr=10.0.0.2, port=4420 00:27:32.802 [2024-12-06 18:39:27.196895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9f6d0 is same with the state(6) to be set 00:27:32.802 [2024-12-06 18:39:27.196952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9f6d0 (9): Bad file descriptor 00:27:32.802 [2024-12-06 18:39:27.197096] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:32.802 [2024-12-06 18:39:27.197156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:32.802 [2024-12-06 18:39:27.197180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:32.802 [2024-12-06 18:39:27.197204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:32.802 [2024-12-06 18:39:27.197225] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:32.802 [2024-12-06 18:39:27.197244] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:32.802 [2024-12-06 18:39:27.197259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:32.802 [2024-12-06 18:39:27.197282] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:32.802 [2024-12-06 18:39:27.197297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:32.802 18:39:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.802 18:39:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:32.802 18:39:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.743 [2024-12-06 18:39:28.199703] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:33.743 [2024-12-06 18:39:28.199719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
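The polling loop traced above runs the same probe once per second: dump the bdev names over the host RPC socket, flatten them into a single sorted line, and compare against the expected device. A minimal bash sketch of that pattern, reconstructed from the traced commands (the real helpers live in host/discovery_remove_ifc.sh; this is a reconstruction, not the verbatim source):

# Reconstruction of the get_bdev_list/wait_for_bdev pattern seen in the
# trace. rpc_cmd is the SPDK RPC wrapper already used throughout this log.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    # Re-poll once per second until the flattened bdev list matches.
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}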
00:27:33.743 [2024-12-06 18:39:28.199728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:33.743 [2024-12-06 18:39:28.199734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:33.743 [2024-12-06 18:39:28.199740] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:33.743 [2024-12-06 18:39:28.199745] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:33.743 [2024-12-06 18:39:28.199749] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:33.743 [2024-12-06 18:39:28.199752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:33.743 [2024-12-06 18:39:28.199771] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:33.743 [2024-12-06 18:39:28.199793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.743 [2024-12-06 18:39:28.199800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 18:39:28.199808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.743 [2024-12-06 18:39:28.199813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 18:39:28.199818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.743 [2024-12-06 18:39:28.199823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 18:39:28.199829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.743 [2024-12-06 18:39:28.199835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 18:39:28.199841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.743 [2024-12-06 18:39:28.199846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.743 [2024-12-06 18:39:28.199851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
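errno 110 in the connect() failures above is ETIMEDOUT: the target-side address was taken away, so every reconnect attempt times out and the discovery service eventually drops the subsystem entry. Only the restore half is visible in this excerpt (discovery_remove_ifc.sh@82-83, a few records below); the removal that triggered the failures is inferred, not shown:

# Inferred fault injection plus the traced recovery. The 'del' step is an
# assumption mirroring the restore commands recorded at @82/@83.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0  # assumed
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # @82
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up               # @83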
00:27:33.743 [2024-12-06 18:39:28.199952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c8edf0 (9): Bad file descriptor 00:27:33.743 [2024-12-06 18:39:28.200962] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:33.743 [2024-12-06 18:39:28.200971] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:33.743 18:39:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.684 18:39:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:34.684 18:39:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.625 [2024-12-06 18:39:30.254661] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:35.625 [2024-12-06 18:39:30.254684] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:35.625 [2024-12-06 18:39:30.254695] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:35.625 [2024-12-06 18:39:30.385067] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:35.885 [2024-12-06 18:39:30.441716] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:35.885 [2024-12-06 18:39:30.442415] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1ccc640:1 started. 
00:27:35.885 [2024-12-06 18:39:30.443325] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:35.885 [2024-12-06 18:39:30.443354] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:35.885 [2024-12-06 18:39:30.443369] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:35.885 [2024-12-06 18:39:30.443380] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:35.885 [2024-12-06 18:39:30.443386] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.885 [2024-12-06 18:39:30.493216] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1ccc640 was disconnected and freed. delete nvme_qpair. 
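The burst above is the discovery poller completing a full cycle: attach to the discovery controller on 10.0.0.2:8009, fetch the log page, create subsystem nvme1 (hence bdev nvme1n1), then free the probe qpair. The call that starts such a session is not in this excerpt; a sketch of what it typically looks like against the same host socket, with the exact options being an assumption rather than the script's verbatim invocation:

# Assumed discovery start (not shown in this excerpt): attach a discovery
# controller so the bdev layer auto-creates nvme<N>n1 bdevs for each
# subsystem reported in the discovery log page.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.2 -s 8009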
00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2275083 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2275083 ']' 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2275083 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2275083 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2275083' 00:27:35.885 killing process with pid 2275083 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2275083 00:27:35.885 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2275083 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.145 rmmod nvme_tcp 00:27:36.145 rmmod nvme_fabrics 00:27:36.145 rmmod nvme_keyring 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2274808 ']' 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2274808 00:27:36.145 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2274808 ']' 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2274808 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274808 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274808' 00:27:36.146 killing process with pid 2274808 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2274808 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2274808 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.146 18:39:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.689 18:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.689 00:27:38.689 real 0m23.279s 00:27:38.689 user 0m27.210s 00:27:38.689 sys 0m7.102s 00:27:38.689 18:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.689 18:39:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.689 ************************************ 00:27:38.689 END TEST nvmf_discovery_remove_ifc 00:27:38.689 ************************************ 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.689 ************************************ 00:27:38.689 START TEST nvmf_identify_kernel_target 00:27:38.689 ************************************ 00:27:38.689 18:39:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:38.689 * Looking for test storage... 00:27:38.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:38.689 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.690 --rc genhtml_branch_coverage=1 00:27:38.690 --rc genhtml_function_coverage=1 00:27:38.690 --rc genhtml_legend=1 00:27:38.690 --rc geninfo_all_blocks=1 00:27:38.690 --rc geninfo_unexecuted_blocks=1 00:27:38.690 00:27:38.690 ' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.690 --rc genhtml_branch_coverage=1 00:27:38.690 --rc genhtml_function_coverage=1 00:27:38.690 --rc genhtml_legend=1 00:27:38.690 --rc geninfo_all_blocks=1 00:27:38.690 --rc geninfo_unexecuted_blocks=1 00:27:38.690 00:27:38.690 ' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.690 --rc genhtml_branch_coverage=1 00:27:38.690 --rc genhtml_function_coverage=1 00:27:38.690 --rc genhtml_legend=1 00:27:38.690 --rc geninfo_all_blocks=1 00:27:38.690 --rc geninfo_unexecuted_blocks=1 00:27:38.690 00:27:38.690 ' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.690 --rc genhtml_branch_coverage=1 00:27:38.690 --rc genhtml_function_coverage=1 00:27:38.690 --rc genhtml_legend=1 00:27:38.690 --rc geninfo_all_blocks=1 00:27:38.690 --rc geninfo_unexecuted_blocks=1 00:27:38.690 00:27:38.690 ' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:38.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.690 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.691 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.691 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.691 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.691 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.691 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.691 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.691 18:39:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:46.827 18:39:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:46.827 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:46.827 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:46.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:46.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:46.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:27:46.827 00:27:46.827 --- 10.0.0.2 ping statistics --- 00:27:46.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.827 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:27:46.827 00:27:46.827 --- 10.0.0.1 ping statistics --- 00:27:46.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.827 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.827 18:39:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:46.827 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:46.828 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:46.828 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:46.828 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:46.828 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:46.828 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:46.828 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:46.828 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:46.828 18:39:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:50.121 Waiting for block devices as requested 00:27:50.121 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:50.121 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:50.121 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:50.121 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:50.121 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:50.121 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:50.121 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:50.121 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:50.121 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:50.381 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:50.641 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:50.641 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:50.641 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:50.641 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:50.901 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:50.901 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:50.901 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:51.471 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:51.471 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:51.471 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:51.471 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:51.471 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:51.472 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
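bash xtrace does not record redirection targets, so the bare mkdir/echo sequence in the configure_kernel_target trace that follows loses its configfs destinations. Below is a reconstruction with standard kernel nvmet attribute paths filled in; the attribute names are the stock nvmet ones, but pairing them with the traced echoes (nvmf/common.sh@686-@705) is an inference, not the script source:

# configure_kernel_target, reconstructed: export /dev/nvme0n1 as namespace 1
# of nqn.2016-06.io.spdk:testnqn on a TCP port at 10.0.0.1:4420. The echo
# destinations are inferred; xtrace hides them in the log below.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"        # @686-@688
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"  # @693
echo 1 > "$subsys/attr_allow_any_host"                         # @695
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # @696
echo 1 > "$subsys/namespaces/1/enable"                         # @697
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                   # @699
echo tcp > "$nvmet/ports/1/addr_trtype"                        # @700
echo 4420 > "$nvmet/ports/1/addr_trsvcid"                      # @701
echo ipv4 > "$nvmet/ports/1/addr_adrfam"                       # @702
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # @705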
00:27:51.472 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:51.472 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:51.472 18:39:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:51.472 No valid GPT data, bailing 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:51.472 00:27:51.472 Discovery Log Number of Records 2, Generation counter 2 00:27:51.472 =====Discovery Log Entry 0====== 00:27:51.472 trtype: tcp 00:27:51.472 adrfam: ipv4 00:27:51.472 subtype: current discovery subsystem 00:27:51.472 treq: not specified, sq flow control disable supported 00:27:51.472 portid: 1 00:27:51.472 trsvcid: 4420 00:27:51.472 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:51.472 traddr: 10.0.0.1 00:27:51.472 eflags: none 00:27:51.472 sectype: none 00:27:51.472 =====Discovery Log Entry 1====== 00:27:51.472 trtype: tcp 00:27:51.472 adrfam: ipv4 00:27:51.472 subtype: nvme subsystem 00:27:51.472 treq: not specified, sq flow control disable 
supported 00:27:51.472 portid: 1 00:27:51.472 trsvcid: 4420 00:27:51.472 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:51.472 traddr: 10.0.0.1 00:27:51.472 eflags: none 00:27:51.472 sectype: none 00:27:51.472 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:51.472 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:51.734 ===================================================== 00:27:51.734 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:51.734 ===================================================== 00:27:51.734 Controller Capabilities/Features 00:27:51.734 ================================ 00:27:51.734 Vendor ID: 0000 00:27:51.734 Subsystem Vendor ID: 0000 00:27:51.734 Serial Number: e1aa221e9d2f9d7fc0a5 00:27:51.734 Model Number: Linux 00:27:51.734 Firmware Version: 6.8.9-20 00:27:51.734 Recommended Arb Burst: 0 00:27:51.734 IEEE OUI Identifier: 00 00 00 00:27:51.734 Multi-path I/O 00:27:51.734 May have multiple subsystem ports: No 00:27:51.734 May have multiple controllers: No 00:27:51.734 Associated with SR-IOV VF: No 00:27:51.734 Max Data Transfer Size: Unlimited 00:27:51.734 Max Number of Namespaces: 0 00:27:51.734 Max Number of I/O Queues: 1024 00:27:51.734 NVMe Specification Version (VS): 1.3 00:27:51.734 NVMe Specification Version (Identify): 1.3 00:27:51.734 Maximum Queue Entries: 1024 00:27:51.734 Contiguous Queues Required: No 00:27:51.734 Arbitration Mechanisms Supported 00:27:51.734 Weighted Round Robin: Not Supported 00:27:51.734 Vendor Specific: Not Supported 00:27:51.734 Reset Timeout: 7500 ms 00:27:51.734 Doorbell Stride: 4 bytes 00:27:51.734 NVM Subsystem Reset: Not Supported 00:27:51.734 Command Sets Supported 00:27:51.734 NVM Command Set: Supported 00:27:51.734 Boot Partition: Not Supported 00:27:51.734 Memory Page Size Minimum: 4096 bytes 00:27:51.734 Memory Page Size Maximum: 4096 bytes 00:27:51.734 Persistent Memory Region: Not Supported 00:27:51.734 Optional Asynchronous Events Supported 00:27:51.734 Namespace Attribute Notices: Not Supported 00:27:51.734 Firmware Activation Notices: Not Supported 00:27:51.734 ANA Change Notices: Not Supported 00:27:51.734 PLE Aggregate Log Change Notices: Not Supported 00:27:51.734 LBA Status Info Alert Notices: Not Supported 00:27:51.734 EGE Aggregate Log Change Notices: Not Supported 00:27:51.734 Normal NVM Subsystem Shutdown event: Not Supported 00:27:51.734 Zone Descriptor Change Notices: Not Supported 00:27:51.734 Discovery Log Change Notices: Supported 00:27:51.734 Controller Attributes 00:27:51.734 128-bit Host Identifier: Not Supported 00:27:51.734 Non-Operational Permissive Mode: Not Supported 00:27:51.734 NVM Sets: Not Supported 00:27:51.734 Read Recovery Levels: Not Supported 00:27:51.734 Endurance Groups: Not Supported 00:27:51.734 Predictable Latency Mode: Not Supported 00:27:51.734 Traffic Based Keep ALive: Not Supported 00:27:51.734 Namespace Granularity: Not Supported 00:27:51.734 SQ Associations: Not Supported 00:27:51.734 UUID List: Not Supported 00:27:51.734 Multi-Domain Subsystem: Not Supported 00:27:51.734 Fixed Capacity Management: Not Supported 00:27:51.734 Variable Capacity Management: Not Supported 00:27:51.734 Delete Endurance Group: Not Supported 00:27:51.734 Delete NVM Set: Not Supported 00:27:51.734 Extended LBA Formats Supported: Not Supported 00:27:51.734 Flexible Data Placement 
Supported: Not Supported 00:27:51.734 00:27:51.734 Controller Memory Buffer Support 00:27:51.734 ================================ 00:27:51.734 Supported: No 00:27:51.734 00:27:51.734 Persistent Memory Region Support 00:27:51.734 ================================ 00:27:51.734 Supported: No 00:27:51.734 00:27:51.734 Admin Command Set Attributes 00:27:51.734 ============================ 00:27:51.734 Security Send/Receive: Not Supported 00:27:51.734 Format NVM: Not Supported 00:27:51.734 Firmware Activate/Download: Not Supported 00:27:51.734 Namespace Management: Not Supported 00:27:51.734 Device Self-Test: Not Supported 00:27:51.734 Directives: Not Supported 00:27:51.734 NVMe-MI: Not Supported 00:27:51.734 Virtualization Management: Not Supported 00:27:51.734 Doorbell Buffer Config: Not Supported 00:27:51.734 Get LBA Status Capability: Not Supported 00:27:51.734 Command & Feature Lockdown Capability: Not Supported 00:27:51.734 Abort Command Limit: 1 00:27:51.734 Async Event Request Limit: 1 00:27:51.734 Number of Firmware Slots: N/A 00:27:51.734 Firmware Slot 1 Read-Only: N/A 00:27:51.734 Firmware Activation Without Reset: N/A 00:27:51.734 Multiple Update Detection Support: N/A 00:27:51.734 Firmware Update Granularity: No Information Provided 00:27:51.734 Per-Namespace SMART Log: No 00:27:51.734 Asymmetric Namespace Access Log Page: Not Supported 00:27:51.734 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:51.734 Command Effects Log Page: Not Supported 00:27:51.734 Get Log Page Extended Data: Supported 00:27:51.734 Telemetry Log Pages: Not Supported 00:27:51.734 Persistent Event Log Pages: Not Supported 00:27:51.734 Supported Log Pages Log Page: May Support 00:27:51.734 Commands Supported & Effects Log Page: Not Supported 00:27:51.734 Feature Identifiers & Effects Log Page:May Support 00:27:51.734 NVMe-MI Commands & Effects Log Page: May Support 00:27:51.734 Data Area 4 for Telemetry Log: Not Supported 00:27:51.734 Error Log Page Entries Supported: 1 00:27:51.734 Keep Alive: Not Supported 00:27:51.734 00:27:51.734 NVM Command Set Attributes 00:27:51.734 ========================== 00:27:51.734 Submission Queue Entry Size 00:27:51.734 Max: 1 00:27:51.734 Min: 1 00:27:51.734 Completion Queue Entry Size 00:27:51.734 Max: 1 00:27:51.734 Min: 1 00:27:51.734 Number of Namespaces: 0 00:27:51.734 Compare Command: Not Supported 00:27:51.734 Write Uncorrectable Command: Not Supported 00:27:51.734 Dataset Management Command: Not Supported 00:27:51.734 Write Zeroes Command: Not Supported 00:27:51.734 Set Features Save Field: Not Supported 00:27:51.734 Reservations: Not Supported 00:27:51.734 Timestamp: Not Supported 00:27:51.734 Copy: Not Supported 00:27:51.734 Volatile Write Cache: Not Present 00:27:51.734 Atomic Write Unit (Normal): 1 00:27:51.734 Atomic Write Unit (PFail): 1 00:27:51.734 Atomic Compare & Write Unit: 1 00:27:51.734 Fused Compare & Write: Not Supported 00:27:51.734 Scatter-Gather List 00:27:51.734 SGL Command Set: Supported 00:27:51.734 SGL Keyed: Not Supported 00:27:51.734 SGL Bit Bucket Descriptor: Not Supported 00:27:51.734 SGL Metadata Pointer: Not Supported 00:27:51.734 Oversized SGL: Not Supported 00:27:51.734 SGL Metadata Address: Not Supported 00:27:51.734 SGL Offset: Supported 00:27:51.734 Transport SGL Data Block: Not Supported 00:27:51.734 Replay Protected Memory Block: Not Supported 00:27:51.734 00:27:51.734 Firmware Slot Information 00:27:51.734 ========================= 00:27:51.734 Active slot: 0 00:27:51.734 00:27:51.734 00:27:51.734 Error Log 00:27:51.734 
========= 00:27:51.734 00:27:51.734 Active Namespaces 00:27:51.734 ================= 00:27:51.734 Discovery Log Page 00:27:51.734 ================== 00:27:51.734 Generation Counter: 2 00:27:51.734 Number of Records: 2 00:27:51.734 Record Format: 0 00:27:51.734 00:27:51.734 Discovery Log Entry 0 00:27:51.735 ---------------------- 00:27:51.735 Transport Type: 3 (TCP) 00:27:51.735 Address Family: 1 (IPv4) 00:27:51.735 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:51.735 Entry Flags: 00:27:51.735 Duplicate Returned Information: 0 00:27:51.735 Explicit Persistent Connection Support for Discovery: 0 00:27:51.735 Transport Requirements: 00:27:51.735 Secure Channel: Not Specified 00:27:51.735 Port ID: 1 (0x0001) 00:27:51.735 Controller ID: 65535 (0xffff) 00:27:51.735 Admin Max SQ Size: 32 00:27:51.735 Transport Service Identifier: 4420 00:27:51.735 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:51.735 Transport Address: 10.0.0.1 00:27:51.735 Discovery Log Entry 1 00:27:51.735 ---------------------- 00:27:51.735 Transport Type: 3 (TCP) 00:27:51.735 Address Family: 1 (IPv4) 00:27:51.735 Subsystem Type: 2 (NVM Subsystem) 00:27:51.735 Entry Flags: 00:27:51.735 Duplicate Returned Information: 0 00:27:51.735 Explicit Persistent Connection Support for Discovery: 0 00:27:51.735 Transport Requirements: 00:27:51.735 Secure Channel: Not Specified 00:27:51.735 Port ID: 1 (0x0001) 00:27:51.735 Controller ID: 65535 (0xffff) 00:27:51.735 Admin Max SQ Size: 32 00:27:51.735 Transport Service Identifier: 4420 00:27:51.735 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:51.735 Transport Address: 10.0.0.1 00:27:51.735 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:51.735 get_feature(0x01) failed 00:27:51.735 get_feature(0x02) failed 00:27:51.735 get_feature(0x04) failed 00:27:51.735 ===================================================== 00:27:51.735 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:51.735 ===================================================== 00:27:51.735 Controller Capabilities/Features 00:27:51.735 ================================ 00:27:51.735 Vendor ID: 0000 00:27:51.735 Subsystem Vendor ID: 0000 00:27:51.735 Serial Number: 01f5ca6c6cd6faf15ec3 00:27:51.735 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:51.735 Firmware Version: 6.8.9-20 00:27:51.735 Recommended Arb Burst: 6 00:27:51.735 IEEE OUI Identifier: 00 00 00 00:27:51.735 Multi-path I/O 00:27:51.735 May have multiple subsystem ports: Yes 00:27:51.735 May have multiple controllers: Yes 00:27:51.735 Associated with SR-IOV VF: No 00:27:51.735 Max Data Transfer Size: Unlimited 00:27:51.735 Max Number of Namespaces: 1024 00:27:51.735 Max Number of I/O Queues: 128 00:27:51.735 NVMe Specification Version (VS): 1.3 00:27:51.735 NVMe Specification Version (Identify): 1.3 00:27:51.735 Maximum Queue Entries: 1024 00:27:51.735 Contiguous Queues Required: No 00:27:51.735 Arbitration Mechanisms Supported 00:27:51.735 Weighted Round Robin: Not Supported 00:27:51.735 Vendor Specific: Not Supported 00:27:51.735 Reset Timeout: 7500 ms 00:27:51.735 Doorbell Stride: 4 bytes 00:27:51.735 NVM Subsystem Reset: Not Supported 00:27:51.735 Command Sets Supported 00:27:51.735 NVM Command Set: Supported 00:27:51.735 Boot Partition: Not Supported 00:27:51.735 
Memory Page Size Minimum: 4096 bytes 00:27:51.735 Memory Page Size Maximum: 4096 bytes 00:27:51.735 Persistent Memory Region: Not Supported 00:27:51.735 Optional Asynchronous Events Supported 00:27:51.735 Namespace Attribute Notices: Supported 00:27:51.735 Firmware Activation Notices: Not Supported 00:27:51.735 ANA Change Notices: Supported 00:27:51.735 PLE Aggregate Log Change Notices: Not Supported 00:27:51.735 LBA Status Info Alert Notices: Not Supported 00:27:51.735 EGE Aggregate Log Change Notices: Not Supported 00:27:51.735 Normal NVM Subsystem Shutdown event: Not Supported 00:27:51.735 Zone Descriptor Change Notices: Not Supported 00:27:51.735 Discovery Log Change Notices: Not Supported 00:27:51.735 Controller Attributes 00:27:51.735 128-bit Host Identifier: Supported 00:27:51.735 Non-Operational Permissive Mode: Not Supported 00:27:51.735 NVM Sets: Not Supported 00:27:51.735 Read Recovery Levels: Not Supported 00:27:51.735 Endurance Groups: Not Supported 00:27:51.735 Predictable Latency Mode: Not Supported 00:27:51.735 Traffic Based Keep ALive: Supported 00:27:51.735 Namespace Granularity: Not Supported 00:27:51.735 SQ Associations: Not Supported 00:27:51.735 UUID List: Not Supported 00:27:51.735 Multi-Domain Subsystem: Not Supported 00:27:51.735 Fixed Capacity Management: Not Supported 00:27:51.735 Variable Capacity Management: Not Supported 00:27:51.735 Delete Endurance Group: Not Supported 00:27:51.735 Delete NVM Set: Not Supported 00:27:51.735 Extended LBA Formats Supported: Not Supported 00:27:51.735 Flexible Data Placement Supported: Not Supported 00:27:51.735 00:27:51.735 Controller Memory Buffer Support 00:27:51.735 ================================ 00:27:51.735 Supported: No 00:27:51.735 00:27:51.735 Persistent Memory Region Support 00:27:51.735 ================================ 00:27:51.735 Supported: No 00:27:51.735 00:27:51.735 Admin Command Set Attributes 00:27:51.735 ============================ 00:27:51.735 Security Send/Receive: Not Supported 00:27:51.735 Format NVM: Not Supported 00:27:51.735 Firmware Activate/Download: Not Supported 00:27:51.735 Namespace Management: Not Supported 00:27:51.735 Device Self-Test: Not Supported 00:27:51.735 Directives: Not Supported 00:27:51.735 NVMe-MI: Not Supported 00:27:51.735 Virtualization Management: Not Supported 00:27:51.735 Doorbell Buffer Config: Not Supported 00:27:51.735 Get LBA Status Capability: Not Supported 00:27:51.735 Command & Feature Lockdown Capability: Not Supported 00:27:51.735 Abort Command Limit: 4 00:27:51.735 Async Event Request Limit: 4 00:27:51.735 Number of Firmware Slots: N/A 00:27:51.735 Firmware Slot 1 Read-Only: N/A 00:27:51.735 Firmware Activation Without Reset: N/A 00:27:51.735 Multiple Update Detection Support: N/A 00:27:51.735 Firmware Update Granularity: No Information Provided 00:27:51.735 Per-Namespace SMART Log: Yes 00:27:51.735 Asymmetric Namespace Access Log Page: Supported 00:27:51.735 ANA Transition Time : 10 sec 00:27:51.735 00:27:51.735 Asymmetric Namespace Access Capabilities 00:27:51.735 ANA Optimized State : Supported 00:27:51.735 ANA Non-Optimized State : Supported 00:27:51.735 ANA Inaccessible State : Supported 00:27:51.735 ANA Persistent Loss State : Supported 00:27:51.735 ANA Change State : Supported 00:27:51.735 ANAGRPID is not changed : No 00:27:51.735 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:51.735 00:27:51.735 ANA Group Identifier Maximum : 128 00:27:51.735 Number of ANA Group Identifiers : 128 00:27:51.735 Max Number of Allowed Namespaces : 1024 00:27:51.735 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:51.735 Command Effects Log Page: Supported 00:27:51.735 Get Log Page Extended Data: Supported 00:27:51.735 Telemetry Log Pages: Not Supported 00:27:51.735 Persistent Event Log Pages: Not Supported 00:27:51.735 Supported Log Pages Log Page: May Support 00:27:51.735 Commands Supported & Effects Log Page: Not Supported 00:27:51.735 Feature Identifiers & Effects Log Page:May Support 00:27:51.735 NVMe-MI Commands & Effects Log Page: May Support 00:27:51.735 Data Area 4 for Telemetry Log: Not Supported 00:27:51.735 Error Log Page Entries Supported: 128 00:27:51.735 Keep Alive: Supported 00:27:51.735 Keep Alive Granularity: 1000 ms 00:27:51.735 00:27:51.735 NVM Command Set Attributes 00:27:51.735 ========================== 00:27:51.735 Submission Queue Entry Size 00:27:51.735 Max: 64 00:27:51.735 Min: 64 00:27:51.735 Completion Queue Entry Size 00:27:51.735 Max: 16 00:27:51.735 Min: 16 00:27:51.735 Number of Namespaces: 1024 00:27:51.735 Compare Command: Not Supported 00:27:51.735 Write Uncorrectable Command: Not Supported 00:27:51.735 Dataset Management Command: Supported 00:27:51.735 Write Zeroes Command: Supported 00:27:51.735 Set Features Save Field: Not Supported 00:27:51.735 Reservations: Not Supported 00:27:51.735 Timestamp: Not Supported 00:27:51.735 Copy: Not Supported 00:27:51.735 Volatile Write Cache: Present 00:27:51.735 Atomic Write Unit (Normal): 1 00:27:51.735 Atomic Write Unit (PFail): 1 00:27:51.735 Atomic Compare & Write Unit: 1 00:27:51.735 Fused Compare & Write: Not Supported 00:27:51.735 Scatter-Gather List 00:27:51.735 SGL Command Set: Supported 00:27:51.735 SGL Keyed: Not Supported 00:27:51.735 SGL Bit Bucket Descriptor: Not Supported 00:27:51.735 SGL Metadata Pointer: Not Supported 00:27:51.735 Oversized SGL: Not Supported 00:27:51.735 SGL Metadata Address: Not Supported 00:27:51.735 SGL Offset: Supported 00:27:51.735 Transport SGL Data Block: Not Supported 00:27:51.735 Replay Protected Memory Block: Not Supported 00:27:51.735 00:27:51.735 Firmware Slot Information 00:27:51.735 ========================= 00:27:51.735 Active slot: 0 00:27:51.735 00:27:51.736 Asymmetric Namespace Access 00:27:51.736 =========================== 00:27:51.736 Change Count : 0 00:27:51.736 Number of ANA Group Descriptors : 1 00:27:51.736 ANA Group Descriptor : 0 00:27:51.736 ANA Group ID : 1 00:27:51.736 Number of NSID Values : 1 00:27:51.736 Change Count : 0 00:27:51.736 ANA State : 1 00:27:51.736 Namespace Identifier : 1 00:27:51.736 00:27:51.736 Commands Supported and Effects 00:27:51.736 ============================== 00:27:51.736 Admin Commands 00:27:51.736 -------------- 00:27:51.736 Get Log Page (02h): Supported 00:27:51.736 Identify (06h): Supported 00:27:51.736 Abort (08h): Supported 00:27:51.736 Set Features (09h): Supported 00:27:51.736 Get Features (0Ah): Supported 00:27:51.736 Asynchronous Event Request (0Ch): Supported 00:27:51.736 Keep Alive (18h): Supported 00:27:51.736 I/O Commands 00:27:51.736 ------------ 00:27:51.736 Flush (00h): Supported 00:27:51.736 Write (01h): Supported LBA-Change 00:27:51.736 Read (02h): Supported 00:27:51.736 Write Zeroes (08h): Supported LBA-Change 00:27:51.736 Dataset Management (09h): Supported 00:27:51.736 00:27:51.736 Error Log 00:27:51.736 ========= 00:27:51.736 Entry: 0 00:27:51.736 Error Count: 0x3 00:27:51.736 Submission Queue Id: 0x0 00:27:51.736 Command Id: 0x5 00:27:51.736 Phase Bit: 0 00:27:51.736 Status Code: 0x2 00:27:51.736 Status Code Type: 0x0 00:27:51.736 Do Not Retry: 1 00:27:51.736 
Error Location: 0x28 00:27:51.736 LBA: 0x0 00:27:51.736 Namespace: 0x0 00:27:51.736 Vendor Log Page: 0x0 00:27:51.736 ----------- 00:27:51.736 Entry: 1 00:27:51.736 Error Count: 0x2 00:27:51.736 Submission Queue Id: 0x0 00:27:51.736 Command Id: 0x5 00:27:51.736 Phase Bit: 0 00:27:51.736 Status Code: 0x2 00:27:51.736 Status Code Type: 0x0 00:27:51.736 Do Not Retry: 1 00:27:51.736 Error Location: 0x28 00:27:51.736 LBA: 0x0 00:27:51.736 Namespace: 0x0 00:27:51.736 Vendor Log Page: 0x0 00:27:51.736 ----------- 00:27:51.736 Entry: 2 00:27:51.736 Error Count: 0x1 00:27:51.736 Submission Queue Id: 0x0 00:27:51.736 Command Id: 0x4 00:27:51.736 Phase Bit: 0 00:27:51.736 Status Code: 0x2 00:27:51.736 Status Code Type: 0x0 00:27:51.736 Do Not Retry: 1 00:27:51.736 Error Location: 0x28 00:27:51.736 LBA: 0x0 00:27:51.736 Namespace: 0x0 00:27:51.736 Vendor Log Page: 0x0 00:27:51.736 00:27:51.736 Number of Queues 00:27:51.736 ================ 00:27:51.736 Number of I/O Submission Queues: 128 00:27:51.736 Number of I/O Completion Queues: 128 00:27:51.736 00:27:51.736 ZNS Specific Controller Data 00:27:51.736 ============================ 00:27:51.736 Zone Append Size Limit: 0 00:27:51.736 00:27:51.736 00:27:51.736 Active Namespaces 00:27:51.736 ================= 00:27:51.736 get_feature(0x05) failed 00:27:51.736 Namespace ID:1 00:27:51.736 Command Set Identifier: NVM (00h) 00:27:51.736 Deallocate: Supported 00:27:51.736 Deallocated/Unwritten Error: Not Supported 00:27:51.736 Deallocated Read Value: Unknown 00:27:51.736 Deallocate in Write Zeroes: Not Supported 00:27:51.736 Deallocated Guard Field: 0xFFFF 00:27:51.736 Flush: Supported 00:27:51.736 Reservation: Not Supported 00:27:51.736 Namespace Sharing Capabilities: Multiple Controllers 00:27:51.736 Size (in LBAs): 3750748848 (1788GiB) 00:27:51.736 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:51.736 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:51.736 UUID: e8e29c61-f5da-457b-89ad-ae07c24f17b4 00:27:51.736 Thin Provisioning: Not Supported 00:27:51.736 Per-NS Atomic Units: Yes 00:27:51.736 Atomic Write Unit (Normal): 8 00:27:51.736 Atomic Write Unit (PFail): 8 00:27:51.736 Preferred Write Granularity: 8 00:27:51.736 Atomic Compare & Write Unit: 8 00:27:51.736 Atomic Boundary Size (Normal): 0 00:27:51.736 Atomic Boundary Size (PFail): 0 00:27:51.736 Atomic Boundary Offset: 0 00:27:51.736 NGUID/EUI64 Never Reused: No 00:27:51.736 ANA group ID: 1 00:27:51.736 Namespace Write Protected: No 00:27:51.736 Number of LBA Formats: 1 00:27:51.736 Current LBA Format: LBA Format #00 00:27:51.736 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:51.736 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:51.736 rmmod nvme_tcp 00:27:51.736 rmmod nvme_fabrics 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.736 18:39:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:54.282 18:39:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:57.588 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:57.588 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:57.588 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:57.588 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:57.588 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:57.588 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:57.588 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:57.588 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:57.588 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:57.589 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:57.589 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:57.589 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:57.589 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:57.589 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:57.589 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:57.589 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:57.589 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:58.162 00:27:58.162 real 0m19.624s 00:27:58.162 user 0m5.458s 00:27:58.162 sys 0m11.160s 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.162 ************************************ 00:27:58.162 END TEST nvmf_identify_kernel_target 00:27:58.162 ************************************ 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.162 ************************************ 00:27:58.162 START TEST nvmf_auth_host 00:27:58.162 ************************************ 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:58.162 * Looking for test storage... 
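Annotation: before the auth test spinning up here, the identify test closed out with clean_kernel_target and a setup.sh rebind (traced above): the configfs target is dismantled in reverse order, the nvmet modules are unloaded, and the devices go back to vfio-pci. A condensed sketch of that teardown; the target of the "echo 0" is not visible in the xtrace, so disabling the namespace first is an assumption based on the standard sequence:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > $sub/namespaces/1/enable                 # assumed: quiesce the namespace first
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir $sub/namespaces/1 /sys/kernel/config/nvmet/ports/1 $sub
modprobe -r nvmet_tcp nvmet                       # rmdir must empty the tree before this succeeds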
00:27:58.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:58.162 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:58.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.424 --rc genhtml_branch_coverage=1 00:27:58.424 --rc genhtml_function_coverage=1 00:27:58.424 --rc genhtml_legend=1 00:27:58.424 --rc geninfo_all_blocks=1 00:27:58.424 --rc geninfo_unexecuted_blocks=1 00:27:58.424 00:27:58.424 ' 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:58.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.424 --rc genhtml_branch_coverage=1 00:27:58.424 --rc genhtml_function_coverage=1 00:27:58.424 --rc genhtml_legend=1 00:27:58.424 --rc geninfo_all_blocks=1 00:27:58.424 --rc geninfo_unexecuted_blocks=1 00:27:58.424 00:27:58.424 ' 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:58.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.424 --rc genhtml_branch_coverage=1 00:27:58.424 --rc genhtml_function_coverage=1 00:27:58.424 --rc genhtml_legend=1 00:27:58.424 --rc geninfo_all_blocks=1 00:27:58.424 --rc geninfo_unexecuted_blocks=1 00:27:58.424 00:27:58.424 ' 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:58.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.424 --rc genhtml_branch_coverage=1 00:27:58.424 --rc genhtml_function_coverage=1 00:27:58.424 --rc genhtml_legend=1 00:27:58.424 --rc geninfo_all_blocks=1 00:27:58.424 --rc geninfo_unexecuted_blocks=1 00:27:58.424 00:27:58.424 ' 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:58.424 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.424 18:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.425 18:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:58.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.425 18:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:06.646 18:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:06.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:06.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:06.646 
18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:06.646 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:06.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:06.647 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.647 18:40:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:06.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:28:06.647 00:28:06.647 --- 10.0.0.2 ping statistics --- 00:28:06.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.647 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:06.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:28:06.647 00:28:06.647 --- 10.0.0.1 ping statistics --- 00:28:06.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.647 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2289340 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2289340 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2289340 ']' 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
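Annotation: nvmf_tcp_init (traced above) builds a two-node NVMe/TCP topology on a single host: one e810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk network namespace as the target side, the other (cvl_0_1) stays in the root namespace as the initiator, and the two pings prove reachability in both directions before nvmf_tgt is launched inside the namespace. A condensed replay of the commands from the trace (the harness additionally tags the iptables rule with an SPDK_NVMF comment, per nvmf/common.sh@790, so iptr can strip it at teardown):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target NIC leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator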
00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.647 18:40:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.647 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.647 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:06.647 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:06.647 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:06.647 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6a7997f7bbddb533e060940b28eb612e 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Yoy 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6a7997f7bbddb533e060940b28eb612e 0 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6a7997f7bbddb533e060940b28eb612e 0 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6a7997f7bbddb533e060940b28eb612e 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Yoy 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Yoy 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Yoy 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:06.908 18:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f183749a3f3824b1046281cd446a893d85467463d6a71041138935464fe8a82b 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mOq 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f183749a3f3824b1046281cd446a893d85467463d6a71041138935464fe8a82b 3 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f183749a3f3824b1046281cd446a893d85467463d6a71041138935464fe8a82b 3 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f183749a3f3824b1046281cd446a893d85467463d6a71041138935464fe8a82b 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mOq 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mOq 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.mOq 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d0eecf1ba018c7cd6bd3c10116f23937c204f6632488cbc5 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dTm 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d0eecf1ba018c7cd6bd3c10116f23937c204f6632488cbc5 0 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d0eecf1ba018c7cd6bd3c10116f23937c204f6632488cbc5 0 
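Every gen_dhchap_key call in this stretch follows the same recipe: draw len/2 random bytes as a hex string, wrap it in DHHC-1 secret framing, and stash the result in a mode-0600 temp file. A minimal sketch of one "null 32" key; the python body is collapsed to "python -" in the trace, so the base64-plus-CRC32 framing below is our reading of the standard DH-HMAC-CHAP secret layout, an assumption rather than a copy of the script:

    digest=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512
    key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex characters
    file=$(mktemp -t spdk.key-null.XXX)

    # DHHC-1:<digest>:<base64(key || crc32(key), CRC as 4 little-endian bytes)>:
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest" > "$file"

    chmod 0600 "$file"                      # secrets must not be world-readable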
00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d0eecf1ba018c7cd6bd3c10116f23937c204f6632488cbc5 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dTm 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dTm 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.dTm 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:06.908 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8fdc24a0fae4895054b87f35aa504e91128570d5a5bcab89 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.t24 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8fdc24a0fae4895054b87f35aa504e91128570d5a5bcab89 2 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8fdc24a0fae4895054b87f35aa504e91128570d5a5bcab89 2 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8fdc24a0fae4895054b87f35aa504e91128570d5a5bcab89 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:07.168 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.t24 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.t24 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.t24 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.169 18:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6124da16ed201e70c8c36168ef96254c 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6b2 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6124da16ed201e70c8c36168ef96254c 1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6124da16ed201e70c8c36168ef96254c 1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6124da16ed201e70c8c36168ef96254c 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6b2 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6b2 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6b2 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fdf324631015d705042f831278814326 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.cTT 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fdf324631015d705042f831278814326 1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fdf324631015d705042f831278814326 1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=fdf324631015d705042f831278814326 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.cTT 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.cTT 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.cTT 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=52d5c9d69c5d40201a399011b71f3a747f69cfafbb5eba0d 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LTA 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 52d5c9d69c5d40201a399011b71f3a747f69cfafbb5eba0d 2 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 52d5c9d69c5d40201a399011b71f3a747f69cfafbb5eba0d 2 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=52d5c9d69c5d40201a399011b71f3a747f69cfafbb5eba0d 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:07.169 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LTA 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LTA 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LTA 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.429 18:40:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ce1b73fc8a9fb6fad1c127ab8995990b 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.lwW 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ce1b73fc8a9fb6fad1c127ab8995990b 0 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ce1b73fc8a9fb6fad1c127ab8995990b 0 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ce1b73fc8a9fb6fad1c127ab8995990b 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:07.429 18:40:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.lwW 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.lwW 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.lwW 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c00bac684f7c4fc0250a6bd348fead8e5a0459e87561e7d3ead68f4ec5277baf 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.qd3 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c00bac684f7c4fc0250a6bd348fead8e5a0459e87561e7d3ead68f4ec5277baf 3 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c00bac684f7c4fc0250a6bd348fead8e5a0459e87561e7d3ead68f4ec5277baf 3 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c00bac684f7c4fc0250a6bd348fead8e5a0459e87561e7d3ead68f4ec5277baf 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.qd3 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.qd3 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qd3 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2289340 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2289340 ']' 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:07.429 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yoy 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.mOq ]] 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mOq 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.dTm 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.t24 ]] 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.t24 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.689 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6b2 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.cTT ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.cTT 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.LTA 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.lwW ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.lwW 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qd3 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.690 18:40:02 
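Before any authentication attempt, each generated secret is handed to the target over RPC, which is what the keyring_file_add_key calls above do; the key<N>/ckey<N> names are exactly what the later --dhchap-key/--dhchap-ctrlr-key flags refer to. Equivalently, with rpc.py invoked directly and the file names from this run:

    ./scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Yoy
    ./scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mOq
    ./scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.dTm
    ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.t24
    # ...and likewise key2/ckey2, key3/ckey3 and key4 (key4 has no ckey)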
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:07.690 18:40:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:10.991 Waiting for block devices as requested 00:28:11.251 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:11.251 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:11.251 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:11.510 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:11.510 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:11.510 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:11.769 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:11.769 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:11.769 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:12.029 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:12.029 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:12.029 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:12.288 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:12.288 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:12.288 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:12.288 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:12.549 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:13.490 No valid GPT data, bailing 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:13.490 18:40:08 
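configure_kernel_target, traced above, builds the kernel-side subsystem entirely through nvmet's configfs tree. A condensed sketch of the mkdir/echo sequence; xtrace does not show the redirection targets of the bare "echo" lines, so the attribute names below are the standard nvmet ones we believe they write to, not confirmed by the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"   # assumed target
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
    echo tcp      > "$nvmet/ports/1/addr_trtype"
    echo 4420     > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4     > "$nvmet/ports/1/addr_adrfam"

    # expose the subsystem on the port (matches the ln -s in the trace below)
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"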
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:28:13.490 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:28:13.491
00:28:13.491 Discovery Log Number of Records 2, Generation counter 2
00:28:13.491 =====Discovery Log Entry 0======
00:28:13.491 trtype: tcp
00:28:13.491 adrfam: ipv4
00:28:13.491 subtype: current discovery subsystem
00:28:13.491 treq: not specified, sq flow control disable supported
00:28:13.491 portid: 1
00:28:13.491 trsvcid: 4420
00:28:13.491 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:13.491 traddr: 10.0.0.1
00:28:13.491 eflags: none
00:28:13.491 sectype: none
00:28:13.491 =====Discovery Log Entry 1======
00:28:13.491 trtype: tcp
00:28:13.491 adrfam: ipv4
00:28:13.491 subtype: nvme subsystem
00:28:13.491 treq: not specified, sq flow control disable supported
00:28:13.491 portid: 1
00:28:13.491 trsvcid: 4420
00:28:13.491 subnqn: nqn.2024-02.io.spdk:cnode0
00:28:13.491 traddr: 10.0.0.1
00:28:13.491 eflags: none
00:28:13.491 sectype: none
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==:
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==:
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host
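On the target side, the per-host DH-HMAC-CHAP material that nvmet_auth_set_key is installing above lives under the host's configfs node. Sketching the flow with this iteration's names; the mapping of each bare "echo" to a specific attribute is inferred from the kernel's documented nvmet host attributes, not shown by the trace:

    nvmet=/sys/kernel/config/nvmet
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

    mkdir "$host"
    echo 0 > "$subsys/attr_allow_any_host"     # only allowed_hosts may connect now
    ln -s "$host" "$subsys/allowed_hosts/"

    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    cat /tmp/spdk.key-null.dTm   > "$host/dhchap_key"       # host secret (key1)
    cat /tmp/spdk.key-sha384.t24 > "$host/dhchap_ctrl_key"  # controller secret (ckey1)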
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.491 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.754 nvme0n1 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
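connect_authenticate then exercises the initiator side: the allowed digests and DH groups are narrowed via bdev_nvme_set_options, and the controller is attached with the keyring entries registered earlier, with DH-HMAC-CHAP running as part of the attach. With the arguments of the sha256/ffdhe2048/keyid=0 iteration just traced (rpc.py path relative to the spdk checkout):

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0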
00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.754 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.016 nvme0n1 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.016 18:40:08 
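Each iteration finishes with the same check and teardown seen above: a successful authentication leaves exactly one controller named nvme0 behind, which is verified and then detached before the next digest/dhgroup/key combination is tried. In shell terms, roughly:

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                       # a failed auth leaves no controller
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0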
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.016 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.277 nvme0n1 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.277 18:40:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.538 nvme0n1 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.538 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.539 nvme0n1 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.539 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.800 nvme0n1 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.800 18:40:09 
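
The DHHC-1 strings echoed in the trace above are the DH-HMAC-CHAP secrets themselves. Read them as three fields: a version tag, a transform id, and a base64 payload. The 00/01/02/03 id indicates whether the payload is the raw secret or a SHA-256/384/512-sized one (32, 48, or 64 bytes, followed by a 4-byte CRC-32). Note that keyid 4 above has ckey= empty, which is why the [[ -z '' ]] guard skips the controller key and the attach passes only --dhchap-key key4, i.e. unidirectional authentication. A quick shape-check of such a secret, as a sketch (the field semantics are read off the trace and the spec, and check_dhchap_key is a hypothetical helper, not part of host/auth.sh):

    # Shape-check a DH-HMAC-CHAP secret string (sketch; hypothetical helper).
    check_dhchap_key() {
        local key=$1
        local re='^DHHC-1:(0[0-3]):([A-Za-z0-9+/]+={0,2}):$'
        [[ $key =~ $re ]] || { echo "malformed DHHC-1 secret" >&2; return 1; }
        local id=${BASH_REMATCH[1]} b64=${BASH_REMATCH[2]}
        # Payload = secret (32/48/64 bytes for ids 00|01, 02, 03) + 4-byte CRC-32.
        local bytes
        bytes=$(printf '%s' "$b64" | base64 -d | wc -c) || return 1
        echo "transform id $id, decoded payload $bytes bytes"
    }

    # The keyid=4 secret above: id 03, so a 64-byte (SHA-512-sized) secret plus CRC.
    check_dhchap_key 'DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=:'

Run it against the other secrets in this log and the payload sizes line up with the transform ids (36, 52, and 68 bytes, CRC included).
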
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.800 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.801 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.801 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.801 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.801 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.062 nvme0n1 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.062 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.323 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.324 
18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.324 18:40:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.324 nvme0n1 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.324 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.585 18:40:10 
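
nvmet_auth_set_key is the target-side half of each round: the four echoes traced above ('hmac(sha256)', the dhgroup, the key, and the ckey) are being written into the kernel nvmet entry for this hostnqn so the target knows which secret to expect. The redirection targets are not visible in the xtrace output; the configfs paths and dhchap_* attribute names below are an assumption based on the stock Linux nvmet layout, using the keyid 2 values from the trace:

    # Target-side key install (sketch; paths/attribute names assumed).
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

    digest=sha256
    dhgroup=ffdhe3072
    key='DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG:'     # keyid 2, from the trace
    ckey='DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr:'   # its controller key

    mkdir -p "$host_dir"
    echo "hmac($digest)" > "$host_dir/dhchap_hash"     # the echo 'hmac(sha256)' above
    echo "$dhgroup" > "$host_dir/dhchap_dhgroup"       # the echo ffdhe3072
    echo "$key" > "$host_dir/dhchap_key"               # host secret
    if [[ -n $ckey ]]; then                            # the [[ -z ... ]] guard in the trace
        echo "$ckey" > "$host_dir/dhchap_ctrl_key"     # bidirectional only
    fi

The guard mirrors keyid 4, which installs no controller key, so that round authenticates the host only.
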
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.585 nvme0n1 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.585 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.847 18:40:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.847 nvme0n1 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.847 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.109 18:40:10 
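
get_main_ns_ip, traced repeatedly above, resolves which address the initiator should dial for the current transport: it maps the transport to the name of an environment variable and then dereferences it, bailing out if either is unset. A minimal reconstruction (the variable names are inferred from the trace, where the conditions appear already expanded, e.g. [[ -z tcp ]]; the real helper lives in nvmf/common.sh):

    # Reconstruction of get_main_ns_ip (names inferred from the trace).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the target-side IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator-side IP

        [[ -z $TEST_TRANSPORT ]] && return 1                   # traced as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # traced as [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                   # ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                            # traced as [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                          # -> 10.0.0.1 in this run
    }

    TEST_TRANSPORT=tcp
    NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip   # prints 10.0.0.1

The indirection (${!ip}) is why the trace shows the variable name being assigned first and the literal 10.0.0.1 only at the final echo.
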
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.109 nvme0n1 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.109 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.371 18:40:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.632 nvme0n1 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:16.632 18:40:11 
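
Each connect_authenticate pass boils down to four RPCs, all visible in the trace: pin the allowed digest/dhgroup, attach with the DH-CHAP key(s), confirm the controller actually materialized, and detach. Spelled out with scripts/rpc.py (rpc_cmd in the trace is autotest's wrapper around it; key0/ckey0 name keyring entries registered earlier in the test, outside this excerpt):

    # One host-side authentication round, reconstructed from the rpc_cmd calls.
    rpc=scripts/rpc.py

    # Pin the initiator to a single digest/dhgroup for this pass.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # Connect with DH-CHAP; adding --dhchap-ctrlr-key makes it bidirectional.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The attach only counts if the controller is really there.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Tear down before the next digest/dhgroup/key combination.
    $rpc bdev_nvme_detach_controller nvme0

A failed handshake would surface at the attach step, which is why the get_controllers/jq check and the [[ nvme0 == nvme0 ]] comparison follow every single attach in the log.
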
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.632 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.946 nvme0n1 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
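
Zooming out, this whole stretch of log is one sweep of a nested loop: for every dhgroup, re-key the target and re-authenticate the host for each of the five keyids. The excerpt covers ffdhe2048 through ffdhe6144 with sha256 fixed. In outline (stub bodies; the real functions and key arrays live in host/auth.sh, and ffdhe8192 is assumed from the usual group list rather than visible here):

    # Outline of the sweep driven by host/auth.sh@101/@102.
    nvmet_auth_set_key()   { :; }   # target side, as sketched earlier
    connect_authenticate() { :; }   # host side, as sketched earlier

    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    keys=(key0 key1 key2 key3 key4)          # keyids 0-4; keyid 4 has no ckey

    for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do       # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104
        done
    done

That structure is why the same set_options/attach/verify/detach pattern repeats verbatim with only the dhgroup, keyid, and timestamps changing.
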
00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.946 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.209 nvme0n1 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.209 18:40:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.470 nvme0n1 00:28:17.470 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.470 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.470 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.470 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.470 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.470 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.731 18:40:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.731 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.992 nvme0n1 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.992 18:40:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.562 nvme0n1 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 
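Before each connect, the matching nvmet_auth_set_key call (host/auth.sh@42-51) primes the Linux kernel nvmet target with the same digest, DH group and secret. Bash xtrace does not print redirections, which is why the echo lines above appear bare; the configfs destinations in the sketch below are an assumption based on the kernel nvmet per-host attribute names, not something visible in this log:

    # Target side of one pass; the paths under /sys/kernel/config are assumed.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host}/dhchap_hash"      # the echo 'hmac(sha256)' above
        echo "$dhgroup"        > "${host}/dhchap_dhgroup"   # the echo ffdhe6144 above
        echo "${keys[keyid]}"  > "${host}/dhchap_key"       # the echo DHHC-1:...: above
        # A controller key enables bidirectional auth; keyid 4 passes none.
        [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
    }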
00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.562 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.822 nvme0n1 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.822 18:40:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.822 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:19.084 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.085 18:40:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.347 nvme0n1 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.347 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.608 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.869 nvme0n1 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
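The secrets echoed throughout follow the NVMe DH-HMAC-CHAP secret representation: a "DHHC-1:" prefix, a two-digit transform marker (00 = cleartext secret; 01/02/03 = a secret sized for SHA-256/384/512), base64 of the secret concatenated with a 4-byte CRC-32 of the secret, and a trailing colon. The lengths check out for every key in this log: the :02: key above decodes to 52 bytes (48-byte secret plus CRC), the :00: and :01: keys to 36, and the :03: keys to 68. A rough shape check under those assumptions (only the decoded lengths were verified against this log, not the CRC itself):

    # Sanity-check the shape of a DHHC-1 secret string.
    check_dhchap_key() {
        local key=$1
        local marker=${key:7:2}                  # the two digits after "DHHC-1:"
        local b64=${key#DHHC-1:??:}; b64=${b64%:}
        local len
        len=$(printf '%s' "$b64" | base64 -d | wc -c)
        # Decoded payload = secret plus a 4-byte CRC-32 of the secret.
        case "$marker" in
            01) ((len == 32 + 4)) ;;             # SHA-256-sized secret
            02) ((len == 48 + 4)) ;;             # SHA-384-sized secret
            03) ((len == 64 + 4)) ;;             # SHA-512-sized secret
            00) ((len == 36 || len == 52 || len == 68)) ;;  # cleartext, any valid size
            *)  return 1 ;;
        esac
    }

Running this over the keys and ckeys visible in this section succeeds for all of them.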
common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.869 18:40:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.440 nvme0n1 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
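The get_main_ns_ip block that repeats before every attach (nvmf/common.sh@769-783) is plain indirection: it maps the transport to the name of the environment variable holding the initiator-side IP, then dereferences it. Reconstructed from the trace; the transport variable's name and the failure branches are assumptions, since only the success path appears here:

    # How 10.0.0.1 gets chosen before each bdev_nvme_attach_controller.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # TEST_TRANSPORT=tcp in this run, so ip becomes NVMF_INITIATOR_IP...
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # ...and indirect expansion yields its value, 10.0.0.1 here.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }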
ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.440 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:21.010 nvme0n1 00:28:21.010 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.010 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.010 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.010 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.010 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.010 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
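A reading aid for the boilerplate bracketing every RPC above: common/autotest_common.sh@563 calls xtrace_disable (hence the immediate "set +x" at @10), so the rpc.py round-trip itself is never traced, and the "[[ 0 == 0 ]]" at @591 is the trace of an exit-status assertion with $? already expanded to 0. In outline only; the function names come from the trace, but the bodies below are assumed and the real wrappers do more:

    xtrace_disable() { set +x; }         # why "@10 -- # set +x" follows each call
    rpc_cmd() {
        xtrace_disable
        "$rootdir/scripts/rpc.py" "$@"   # the untraced round-trip; $rootdir assumed
        local rc=$?
        [[ $rc == 0 ]]                   # traces as "[[ 0 == 0 ]]" on success
    }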
"ckey${keyid}"}) 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.270 18:40:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.843 nvme0n1 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:21.843 
18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.843 18:40:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.788 nvme0n1 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.788 
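Between passes, the bare "nvme0n1" lines are the attach command printing the bdev it created, and host/auth.sh@64-65 then verifies the controller exists before tearing it down, so the next digest/dhgroup/key combination starts from a clean slate. The check, taken from the trace (the variable name is invented; the commands are verbatim):

    # Post-connect verification and teardown (host/auth.sh@64-65).
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]     # the trace shows this pattern glob-escaped: \n\v\m\e\0
    rpc_cmd bdev_nvme_detach_controller nvme0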
18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.788 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.360 nvme0n1 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.360 18:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.360 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.931 nvme0n1 00:28:23.931 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.931 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.931 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.931 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.931 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.931 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.192 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.193 nvme0n1 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.193 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
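[editorial note] The assignment at auth.sh@58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), uses the :+ expansion to build an array that is empty whenever no controller key is configured, so the --dhchap-ctrlr-key flag pair simply vanishes from the attach command for key 4 (whose ckey is '' in the traces above and below). A standalone illustration of the idiom:

    # :+ yields the alternate text only when the variable is set AND
    # non-empty; assigning it into an array gives either 0 or 2 words.
    ckeys=("some-secret" "")
    keyid=0; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # -> 2: flag and value are both passed through
    keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # -> 0: empty ckey expands to nothing at all
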
host/auth.sh@61 -- # get_main_ns_ip 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.454 18:40:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.454 nvme0n1 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:24.454 18:40:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.454 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.455 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.716 nvme0n1 00:28:24.716 18:40:19 
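[editorial note] get_main_ns_ip (nvmf/common.sh@769-@783, repeated before every attach above) maps the transport to a variable *name* via an associative array, then dereferences that name with bash indirect expansion; for tcp it resolves NVMF_INITIATOR_IP, which is 10.0.0.1 in this run. A condensed sketch; the trace only shows the literal value tcp, so the transport variable name here is an assumption:

    # Condensed from the nvmf/common.sh@769-@783 trace: pick a variable
    # name by transport, then dereference it with ${!ip} indirection.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1    # common.sh@778 checks the value
        echo "${!ip}"                  # common.sh@783 -> 10.0.0.1 here
    }
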
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.716 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.977 nvme0n1 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
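[editorial note] Every secret cycled through this section is an NVMe DH-HMAC-CHAP key in the DHHC-1:<t>:<base64>: representation from TP 8006: the <t> field records how the secret was transformed (00 = cleartext; 01/02/03 = HMAC with SHA-256/384/512), and the base64 payload is the raw secret followed by a 4-byte CRC-32 of it. That framing can be sanity-checked from the shell (GNU coreutils assumed):

    # The payload decodes to <secret || crc32(secret)>, so its byte count
    # is the secret size plus 4; key4 above carries a 64-byte secret.
    key='DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=:'
    cut -d: -f3 <<< "$key" | base64 -d | wc -c   # -> 68 (64 + 4 CRC bytes)
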
common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.978 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.238 nvme0n1 00:28:25.238 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.238 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.239 18:40:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.499 nvme0n1 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.500 
18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.500 18:40:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.500 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.761 nvme0n1 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
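[editorial note] Putting the pieces together, connect_authenticate (auth.sh@55-@65) narrows the initiator to a single digest/dhgroup via bdev_nvme_set_options, attaches with the key under test, verifies that exactly one controller named nvme0 came up (the \n\v\m\e\0 form in the trace is just xtrace escaping the unquoted right-hand side of [[ ]]), and detaches again. A sketch reconstructed from the trace, assuming rpc_cmd wraps SPDK's scripts/rpc.py:

    # connect_authenticate as suggested by auth.sh@55-@65 in the trace.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # auth.sh@58

        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"      # auth.sh@60
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"                      # auth.sh@61
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # auth.sh@64
        rpc_cmd bdev_nvme_detach_controller nvme0                        # auth.sh@65
    }
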
DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.761 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.022 nvme0n1 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.022 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.023 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.283 nvme0n1 00:28:26.283 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.283 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.283 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.283 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.283 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.284 
18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.284 18:40:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.544 nvme0n1 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.544 
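[editorial note] The common/autotest_common.sh records interleaved with every rpc_cmd above are a tracing guard: xtrace_disable (@563) runs set +x (@10) so the JSON-RPC plumbing does not flood the log, and the recurring [[ 0 == 0 ]] at @591 is the saved return code being checked before tracing resumes. The sketch below is an assumed shape of that wrapper, inferred from the trace rather than from autotest_common.sh itself:

    # Assumed shape of the guard seen at autotest_common.sh@563/@10/@591;
    # the point is preserving $? across the set +x / set -x toggling.
    xtrace_disable() {
        set +x                               # cf. autotest_common.sh@10
    }

    xtrace_restore() {
        local rc=$?
        [[ $rc == 0 ]] || echo "previous command failed: rc=$rc"  # cf. @591
        set -x
    }
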
18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.544 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.806 nvme0n1 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.806 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.807 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.807 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.807 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.807 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.807 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.807 18:40:21 
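
connect_authenticate (auth.sh@55-65) drives the SPDK host side through rpc_cmd, a thin wrapper around scripts/rpc.py. Stripped of the xtrace noise, the keyid=1 round above is equivalent to the following sequence; key1 and ckey1 are key names registered earlier in the script, outside this excerpt:

# The keyid=1 round of connect_authenticate as plain rpc.py calls.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Success is judged by the controller appearing under its requested name;
# it is then detached so the next dhgroup/keyid round starts clean.
[[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
./scripts/rpc.py bdev_nvme_detach_controller nvme0
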
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.807 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.067 nvme0n1 00:28:27.067 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.067 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.067 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.067 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.067 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.067 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.328 18:40:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.589 nvme0n1 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
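
The get_main_ns_ip helper (nvmf/common.sh@769-783) repeats verbatim before every attach; all it does is map the transport in use to the right address variable and print the resolved value, 10.0.0.1 here. A reconstruction from the traced lines follows; the name of the variable holding the transport (TEST_TRANSPORT) is an assumption:

# Reconstruction of get_main_ns_ip from the @769-783 trace entries.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # holds a *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1           # indirect expansion -> 10.0.0.1
    echo "${!ip}"
}
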
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.589 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.850 nvme0n1 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.850 18:40:22 
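
Every secret in this log uses the DHHC-1:XX:<base64>: serialization from the NVMe DH-HMAC-CHAP spec: the middle field records how the secret was transformed (00 = raw, 01/02/03 = SHA-256/384/512) and the base64 payload carries the secret plus a CRC-32. Note that keyid 4 above has an empty ckey, so that round authenticates the host only, with no bidirectional controller challenge. Keys in this format can be generated with nvme-cli, assuming a build recent enough to ship gen-dhchap-key:

# Generating a DHHC-1 secret with nvme-cli (flag names per nvme-cli docs;
# availability depends on the installed version).
nvme gen-dhchap-key --key-length 32 --hmac 0 --nqn nqn.2024-02.io.spdk:host0
# -> DHHC-1:00:<base64(secret + CRC-32)>:
# --hmac 1|2|3 would emit DHHC-1:01/02/03 keys (SHA-256/384/512-transformed).
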
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.850 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.111 nvme0n1 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.111 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.372 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
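
One reading note on the check that closes each round: [[ nvme0 == \n\v\m\e\0 ]] looks mangled but is just how set -x renders a quoted right-hand side inside [[ ]]; the backslashes force a literal comparison instead of a glob match. A minimal demo:

set -x
name=nvme0
[[ $name == "nvme0" ]]   # xtrace prints: [[ nvme0 == \n\v\m\e\0 ]]
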
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.372 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.373 18:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.633 nvme0n1 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.633 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.634 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.894 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.155 nvme0n1 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.155 18:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.155 18:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.155 18:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.726 nvme0n1 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.726 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.727 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.727 
18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.298 nvme0n1 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.298 18:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.558 nvme0n1 00:28:30.558 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.558 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.558 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.558 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.558 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.558 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.817 18:40:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:30.817 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.818 18:40:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.387 nvme0n1 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.387 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.325 nvme0n1 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.325 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.326 
18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.326 18:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.894 nvme0n1 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.894 18:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.463 nvme0n1 00:28:33.463 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.463 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.463 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.463 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.463 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.464 18:40:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.723 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.724 18:40:28 
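
Each block in this trace starts with nvmet_auth_set_key <digest> <dhgroup> <keyid>, which re-provisions the kernel target side before the initiator reconnects. The xtrace output shows the helper's locals (digest, dhgroup, keyid, key, ckey) and three bare echo calls, but not where the echoes go, since set -x does not print redirections. A minimal sketch of what such a helper plausibly writes, assuming the Linux nvmet configfs attribute names for per-host DH-HMAC-CHAP settings and the keys/ckeys arrays populated earlier in host/auth.sh (both assumptions, neither visible in this log):

    nvmet_auth_set_key() {
        # digest/dhgroup/keyid mirror the locals at host/auth.sh@44; keys and
        # ckeys hold the DHHC-1 secrets generated earlier (assumed, not shown).
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        # Hypothetical configfs entry for the host NQN used throughout this run.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "${host}/dhchap_hash"    # e.g. 'hmac(sha512)'
        echo "${dhgroup}" > "${host}/dhchap_dhgroup"      # e.g. ffdhe8192
        echo "${key}" > "${host}/dhchap_key"              # host secret
        # keyid 4 carries no controller key, so this write is conditional,
        # matching the [[ -z '' ]] branch visible in the trace.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${host}/dhchap_ctrl_key"
    }
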
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.724 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.294 nvme0n1 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.294 18:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.294 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:34.555 nvme0n1 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.555 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.817 nvme0n1 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:34.817 
18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.817 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.079 nvme0n1 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.079 
18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.079 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 nvme0n1 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.341 18:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 nvme0n1 00:28:35.341 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.341 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.341 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.341 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.341 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.602 nvme0n1 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.602 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.863 
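
The initiator half of each cycle is connect_authenticate: it first narrows bdev_nvme to the single digest/dhgroup pair under test, then attaches with the named host key and, when one is defined, the matching controller key. Condensed from the trace into standalone form (rpc_cmd is the suite's wrapper around scripts/rpc.py, and the key0/ckey0 names refer to keys registered earlier in the script):

    digest=sha512 dhgroup=ffdhe3072 keyid=0

    # Allow only the combination being exercised, so a successful connect
    # proves that this exact digest and dhgroup were negotiated.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach to the kernel target over TCP; --dhchap-ctrlr-key is omitted
    # for keyid 4, which has no controller key in this run.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
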
18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.863 18:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.863 nvme0n1 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.863 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:36.125 18:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.125 nvme0n1 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.125 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.386 18:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.386 18:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.386 nvme0n1 00:28:36.386 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.386 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.386 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.386 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.386 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.647 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.647 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.647 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.647 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.648 
18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
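The trace repeats one pattern per key ID: host/auth.sh's nvmet_auth_set_key programs the kernel NVMe-oF target side with the DH-HMAC-CHAP secret for that keyid, then connect_authenticate points the SPDK initiator at it. A minimal sketch of the set-key step follows, reconstructed from the echoes visible at auth.sh@42-51; the log does not show where those echoes are redirected, so the nvmet configfs host path and the dhchap_* attribute names below are assumptions, not confirmed by this output.

  # Sketch only: per-keyid target-side key setup, as suggested by auth.sh@42-51.
  # keys[]/ckeys[] hold the DHHC-1:... secrets printed in the trace above.
  nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      echo "hmac($digest)" > "$nvmet_host/dhchap_hash"      # e.g. 'hmac(sha512)' (auth.sh@48)
      echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"        # e.g. ffdhe3072 (auth.sh@49)
      echo "${keys[keyid]}" > "$nvmet_host/dhchap_key"      # host secret (auth.sh@50)
      # Controller key only exists for some keyids; keyid=4 has ckey='' in the trace.
      [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$nvmet_host/dhchap_ctrl_key"
  }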
00:28:36.648 nvme0n1 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.648 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.909 18:40:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.909 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.170 nvme0n1 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.170 18:40:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:37.170 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.171 18:40:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.171 18:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.433 nvme0n1 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.433 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.695 nvme0n1 00:28:37.695 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.695 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.695 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.695 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.695 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.695 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.955 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.216 nvme0n1 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.216 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.217 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.217 18:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.477 nvme0n1 00:28:38.477 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.477 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.478 18:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.478 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.050 nvme0n1 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.050 18:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.050 18:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.623 nvme0n1 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.623 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.624 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.885 nvme0n1 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.885 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.146 18:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.423 nvme0n1 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:40.423 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.424 18:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.424 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.995 nvme0n1 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmE3OTk3ZjdiYmRkYjUzM2UwNjA5NDBiMjhlYjYxMmVyMf4F: 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjE4Mzc0OWEzZjM4MjRiMTA0NjI4MWNkNDQ2YTg5M2Q4NTQ2NzQ2M2Q2YTcxMDQxMTM4OTM1NDY0ZmU4YTgyYihI5lo=: 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.995 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.996 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.996 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.996 18:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.567 nvme0n1 00:28:41.567 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.567 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.567 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.567 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.567 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.567 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.829 18:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.458 nvme0n1 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.458 18:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.458 18:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.458 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.035 nvme0n1 00:28:43.035 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.035 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.035 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.035 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.035 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.035 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.369 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTJkNWM5ZDY5YzVkNDAyMDFhMzk5MDExYjcxZjNhNzQ3ZjY5Y2ZhZmJiNWViYTBkh+hiTg==: 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: ]] 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2UxYjczZmM4YTlmYjZmYWQxYzEyN2FiODk5NTk5MGKORawi: 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.370 18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.370 
18:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.039 nvme0n1 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzAwYmFjNjg0ZjdjNGZjMDI1MGE2YmQzNDhmZWFkOGU1YTA0NTllODc1NjFlN2QzZWFkNjhmNGVjNTI3N2JhZu2KRds=: 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.039 18:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.610 nvme0n1 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:44.610 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.611 request: 00:28:44.611 { 00:28:44.611 "name": "nvme0", 00:28:44.611 "trtype": "tcp", 00:28:44.611 "traddr": "10.0.0.1", 00:28:44.611 "adrfam": "ipv4", 00:28:44.611 "trsvcid": "4420", 00:28:44.611 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:44.611 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:44.611 "prchk_reftag": false, 00:28:44.611 "prchk_guard": false, 00:28:44.611 "hdgst": false, 00:28:44.611 "ddgst": false, 00:28:44.611 "allow_unrecognized_csi": false, 00:28:44.611 "method": "bdev_nvme_attach_controller", 00:28:44.611 "req_id": 1 00:28:44.611 } 00:28:44.611 Got JSON-RPC error response 00:28:44.611 response: 00:28:44.611 { 00:28:44.611 "code": -5, 00:28:44.611 "message": "Input/output error" 00:28:44.611 } 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.611 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
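What the trace above (host/auth.sh@112 through @114) establishes is the first negative case: with DH-HMAC-CHAP now required on nqn.2024-02.io.spdk:cnode0, a plain attach with no --dhchap-key must fail, the failure surfaces as the JSON-RPC error -5 ("Input/output error") in the request/response dump, and bdev_nvme_get_controllers | jq length must report 0, i.e. the failed connect left no controller behind. A minimal sketch of that expect-failure idiom, using plain rpc.py in place of the harness's rpc_cmd/NOT wrappers (the script path and default RPC socket are assumptions; the transport flags are copied verbatim from the trace):

  # The attach must fail: no DH-HMAC-CHAP key is offered to an auth-required subsystem.
  if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "FAIL: unauthenticated connect unexpectedly succeeded" >&2
      exit 1
  fi
  # ... and it must fail cleanly: no stale controller may remain afterwards.
  [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq length) -eq 0 ]]

The same pattern repeats below for progressively closer misses (wrong key, mismatched controller key), which is why the common/autotest_common.sh@652-@679 bookkeeping lines recur around every request dump.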
00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.872 request: 00:28:44.872 { 00:28:44.872 "name": "nvme0", 00:28:44.872 "trtype": "tcp", 00:28:44.872 "traddr": "10.0.0.1", 00:28:44.872 "adrfam": "ipv4", 00:28:44.872 "trsvcid": "4420", 00:28:44.872 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:44.872 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:44.872 "prchk_reftag": false, 00:28:44.872 "prchk_guard": false, 00:28:44.872 "hdgst": false, 00:28:44.872 "ddgst": false, 00:28:44.872 "dhchap_key": "key2", 00:28:44.872 "allow_unrecognized_csi": false, 00:28:44.872 "method": "bdev_nvme_attach_controller", 00:28:44.872 "req_id": 1 00:28:44.872 } 00:28:44.872 Got JSON-RPC error response 00:28:44.872 response: 00:28:44.872 { 00:28:44.872 "code": -5, 00:28:44.872 "message": "Input/output error" 00:28:44.872 } 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
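This second rejection fails for a subtler reason than the first: the target side was keyed for this host with key index 1 (host/auth.sh@110, nvmet_auth_set_key sha256 ffdhe2048 1), so offering --dhchap-key key2 cannot complete the DH-HMAC-CHAP exchange, and the mismatch again surfaces only as the generic -5 "Input/output error". The DHHC-1 strings traced throughout this run are the standard NVMe-oF secret representation "DHHC-1:<id>:<base64>:", where <id> (00/01/02/03) names the hash used for the retained-key transform (none/SHA-256/SHA-384/SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick structural check of one key taken from this log (plain bash; base64 and wc assumed available):

  # "DHHC-1:01:<payload>:" should decode to a 32-byte secret plus a 4-byte CRC32.
  key='DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG:'
  b64=${key#DHHC-1:01:}   # strip the "DHHC-1:01:" prefix
  b64=${b64%:}            # strip the trailing colon
  echo -n "$b64" | base64 -d | wc -c   # prints 36 (= 32 + 4)

The :02: and :03: keys in this log decode to 52 and 68 bytes respectively, i.e. 48- and 64-byte secrets plus the same trailing CRC.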
00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.872 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.873 request: 00:28:44.873 { 00:28:44.873 "name": "nvme0", 00:28:44.873 "trtype": "tcp", 00:28:44.873 "traddr": "10.0.0.1", 00:28:44.873 "adrfam": "ipv4", 00:28:44.873 "trsvcid": "4420", 00:28:44.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:44.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:44.873 "prchk_reftag": false, 00:28:44.873 "prchk_guard": false, 00:28:44.873 "hdgst": false, 00:28:44.873 "ddgst": false, 00:28:44.873 "dhchap_key": "key1", 00:28:44.873 "dhchap_ctrlr_key": "ckey2", 00:28:44.873 "allow_unrecognized_csi": false, 00:28:44.873 "method": "bdev_nvme_attach_controller", 00:28:44.873 "req_id": 1 00:28:44.873 } 00:28:44.873 Got JSON-RPC error response 00:28:44.873 response: 00:28:44.873 { 00:28:44.873 "code": -5, 00:28:44.873 "message": "Input/output 
error" 00:28:44.873 } 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.873 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.133 nvme0n1 00:28:45.133 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.133 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.134 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.394 request: 00:28:45.395 { 00:28:45.395 "name": "nvme0", 00:28:45.395 "dhchap_key": "key1", 00:28:45.395 "dhchap_ctrlr_key": "ckey2", 00:28:45.395 "method": "bdev_nvme_set_keys", 00:28:45.395 "req_id": 1 00:28:45.395 } 00:28:45.395 Got JSON-RPC error response 00:28:45.395 response: 00:28:45.395 { 00:28:45.395 "code": -13, 00:28:45.395 "message": "Permission denied" 00:28:45.395 } 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:45.395 18:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:46.338 18:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.338 18:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:46.338 18:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.338 18:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.338 18:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.338 18:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:46.338 18:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:47.279 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.279 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:47.279 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.279 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDBlZWNmMWJhMDE4YzdjZDZiZDNjMTAxMTZmMjM5MzdjMjA0ZjY2MzI0ODhjYmM1MjSEoQ==: 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: ]] 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGZkYzI0YTBmYWU0ODk1MDU0Yjg3ZjM1YWE1MDRlOTExMjg1NzBkNWE1YmNhYjg5WkN5bQ==: 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.540 nvme0n1 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjEyNGRhMTZlZDIwMWU3MGM4YzM2MTY4ZWY5NjI1NGOFSLtG: 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: ]] 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmRmMzI0NjMxMDE1ZDcwNTA0MmY4MzEyNzg4MTQzMjbuTqRr: 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.540 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:47.541 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:47.541 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:47.541 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.541 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.800 request: 00:28:47.800 { 00:28:47.800 "name": "nvme0", 00:28:47.800 "dhchap_key": "key2", 00:28:47.800 "dhchap_ctrlr_key": "ckey1", 00:28:47.800 "method": "bdev_nvme_set_keys", 00:28:47.800 "req_id": 1 00:28:47.800 } 00:28:47.800 Got JSON-RPC error response 00:28:47.800 response: 00:28:47.800 { 00:28:47.800 "code": -13, 00:28:47.800 "message": "Permission denied" 00:28:47.800 } 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:47.801 18:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:48.745 18:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:48.745 rmmod nvme_tcp 00:28:48.745 rmmod nvme_fabrics 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2289340 ']' 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2289340 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2289340 ']' 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2289340 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.745 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2289340 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2289340' 00:28:49.006 killing process with pid 2289340 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2289340 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2289340 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:49.006 18:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:51.563 18:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:54.857 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:54.857 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:55.116 18:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Yoy /tmp/spdk.key-null.dTm /tmp/spdk.key-sha256.6b2 /tmp/spdk.key-sha384.LTA /tmp/spdk.key-sha512.qd3 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:55.116 18:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:59.320 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
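The clean_kernel_target sequence traced above tears down the kernel nvmet configfs tree in strict reverse order of creation: unlink the host and the port-to-subsystem link, quiesce the namespace, remove child directories before their parents, and only then unload the modules. A minimal standalone sketch of that order, assuming the same cnode0/host0 NQNs, single namespace and single port as this run (the exact namespace enable-attribute path is an assumption):

#!/usr/bin/env bash
# Sketch of the kernel nvmet teardown shown above; NQNs and indices assumed
# to match this run (nqn.2024-02.io.spdk:cnode0, port 1, namespace 1).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
[[ -e $subsys ]] || exit 0
rm -f "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"   # detach the host first
rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$subsys/namespaces/1/enable"                    # quiesce the namespace (assumed attribute path)
rm -f "$port/subsystems/nqn.2024-02.io.spdk:cnode0"       # unlink port -> subsystem
rmdir "$subsys/namespaces/1"                              # children before parents
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                               # unloads only once configfs is empty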
00:28:59.320 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:59.320 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:59.320 00:28:59.320 real 1m0.914s 00:28:59.320 user 0m54.716s 00:28:59.320 sys 0m16.107s 00:28:59.320 18:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.320 18:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.320 ************************************ 00:28:59.320 END TEST nvmf_auth_host 00:28:59.320 ************************************ 00:28:59.320 18:40:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:59.320 18:40:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:59.320 18:40:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.320 18:40:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.320 18:40:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.320 ************************************ 00:28:59.320 START TEST nvmf_digest 00:28:59.320 ************************************ 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:59.321 * Looking for test storage... 
00:28:59.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:59.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.321 --rc genhtml_branch_coverage=1 00:28:59.321 --rc genhtml_function_coverage=1 00:28:59.321 --rc genhtml_legend=1 00:28:59.321 --rc geninfo_all_blocks=1 00:28:59.321 --rc geninfo_unexecuted_blocks=1 00:28:59.321 00:28:59.321 ' 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:59.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.321 --rc genhtml_branch_coverage=1 00:28:59.321 --rc genhtml_function_coverage=1 00:28:59.321 --rc genhtml_legend=1 00:28:59.321 --rc geninfo_all_blocks=1 00:28:59.321 --rc geninfo_unexecuted_blocks=1 00:28:59.321 00:28:59.321 ' 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:59.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.321 --rc genhtml_branch_coverage=1 00:28:59.321 --rc genhtml_function_coverage=1 00:28:59.321 --rc genhtml_legend=1 00:28:59.321 --rc geninfo_all_blocks=1 00:28:59.321 --rc geninfo_unexecuted_blocks=1 00:28:59.321 00:28:59.321 ' 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:59.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.321 --rc genhtml_branch_coverage=1 00:28:59.321 --rc genhtml_function_coverage=1 00:28:59.321 --rc genhtml_legend=1 00:28:59.321 --rc geninfo_all_blocks=1 00:28:59.321 --rc geninfo_unexecuted_blocks=1 00:28:59.321 00:28:59.321 ' 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.321 
18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.321 18:40:53 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:59.321 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.322 18:40:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.322 18:40:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:07.467 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.468 
18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:07.468 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:07.468 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:07.468 Found net devices under 0000:4b:00.0: cvl_0_0 
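The NIC discovery traced here is plain sysfs walking: for each whitelisted PCI function, nvmf/common.sh expands /sys/bus/pci/devices/$pci/net/* and keeps any bound interface (the real helper also checks operstate is up, omitted below). A minimal sketch of the same mapping, with the two e810 addresses taken from this run:

#!/usr/bin/env bash
# Map PCI functions to their kernel net devices the way nvmf/common.sh does;
# the addresses below are the two e810 ports discovered in this run.
for pci in 0000:4b:00.0 0000:4b:00.1; do
  for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $netdir ]] || continue            # function may have no netdev bound
    echo "Found net devices under $pci: ${netdir##*/}"
  done
done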
00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:07.468 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.468 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:29:07.468 00:29:07.469 --- 10.0.0.2 ping statistics --- 00:29:07.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.469 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:29:07.469 00:29:07.469 --- 10.0.0.1 ping statistics --- 00:29:07.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.469 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.469 ************************************ 00:29:07.469 START TEST nvmf_digest_clean 00:29:07.469 ************************************ 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2306325 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2306325 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2306325 ']' 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.469 18:41:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.469 [2024-12-06 18:41:01.606454] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:29:07.469 [2024-12-06 18:41:01.606516] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.469 [2024-12-06 18:41:01.706632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.469 [2024-12-06 18:41:01.756916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.469 [2024-12-06 18:41:01.756971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.469 [2024-12-06 18:41:01.756980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.469 [2024-12-06 18:41:01.756988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.469 [2024-12-06 18:41:01.756994] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
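nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and waitforlisten then blocks until the RPC socket answers before any configuration is sent. A rough equivalent under the assumptions of this run (same namespace, binary path, and the default /var/tmp/spdk.sock socket; the polling loop approximates what waitforlisten does):

#!/usr/bin/env bash
# Rough equivalent of nvmfappstart + waitforlisten for this run's target.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# UNIX sockets are visible across netns, so rpc.py can poll from the host side.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
  sleep 0.5
done
# --wait-for-rpc parks the app before framework init; release it explicitly.
"$SPDK/scripts/rpc.py" framework_start_init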
00:29:07.469 [2024-12-06 18:41:01.757759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.731 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.732 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.995 null0 00:29:07.995 [2024-12-06 18:41:02.578756] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.995 [2024-12-06 18:41:02.603017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2306523 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2306523 /var/tmp/bperf.sock 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2306523 ']' 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:07.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.995 18:41:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.995 [2024-12-06 18:41:02.661984] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:29:07.995 [2024-12-06 18:41:02.662052] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306523 ] 00:29:07.995 [2024-12-06 18:41:02.752474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.257 [2024-12-06 18:41:02.805478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.830 18:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.830 18:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:08.830 18:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:08.830 18:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:08.830 18:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:09.092 18:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.092 18:41:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.666 nvme0n1 00:29:09.666 18:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:09.666 18:41:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:09.666 Running I/O for 2 seconds... 
00:29:11.550 18863.00 IOPS, 73.68 MiB/s [2024-12-06T17:41:06.334Z] 20145.50 IOPS, 78.69 MiB/s 00:29:11.550 Latency(us) 00:29:11.550 [2024-12-06T17:41:06.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.550 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:11.550 nvme0n1 : 2.01 20183.28 78.84 0.00 0.00 6333.12 2198.19 22937.60 00:29:11.550 [2024-12-06T17:41:06.334Z] =================================================================================================================== 00:29:11.550 [2024-12-06T17:41:06.334Z] Total : 20183.28 78.84 0.00 0.00 6333.12 2198.19 22937.60 00:29:11.550 { 00:29:11.550 "results": [ 00:29:11.550 { 00:29:11.550 "job": "nvme0n1", 00:29:11.550 "core_mask": "0x2", 00:29:11.550 "workload": "randread", 00:29:11.550 "status": "finished", 00:29:11.550 "queue_depth": 128, 00:29:11.550 "io_size": 4096, 00:29:11.550 "runtime": 2.007008, 00:29:11.550 "iops": 20183.277794607693, 00:29:11.550 "mibps": 78.8409288851863, 00:29:11.550 "io_failed": 0, 00:29:11.550 "io_timeout": 0, 00:29:11.550 "avg_latency_us": 6333.12485566637, 00:29:11.550 "min_latency_us": 2198.1866666666665, 00:29:11.550 "max_latency_us": 22937.6 00:29:11.550 } 00:29:11.550 ], 00:29:11.550 "core_count": 1 00:29:11.550 } 00:29:11.550 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:11.550 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:11.550 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:11.550 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:11.550 | select(.opcode=="crc32c") 00:29:11.550 | "\(.module_name) \(.executed)"' 00:29:11.550 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2306523 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2306523 ']' 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2306523 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306523 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306523' 00:29:11.809 killing process with pid 2306523 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2306523 00:29:11.809 Received shutdown signal, test time was about 2.000000 seconds 00:29:11.809 00:29:11.809 Latency(us) 00:29:11.809 [2024-12-06T17:41:06.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.809 [2024-12-06T17:41:06.593Z] =================================================================================================================== 00:29:11.809 [2024-12-06T17:41:06.593Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.809 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2306523 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2307354 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2307354 /var/tmp/bperf.sock 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2307354 ']' 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:12.069 18:41:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:12.069 [2024-12-06 18:41:06.694220] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:29:12.069 [2024-12-06 18:41:06.694277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2307354 ] 00:29:12.069 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:12.069 Zero copy mechanism will not be used. 00:29:12.069 [2024-12-06 18:41:06.775654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.069 [2024-12-06 18:41:06.804916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.009 18:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.009 18:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:13.009 18:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:13.009 18:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:13.009 18:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:13.009 18:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.009 18:41:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.269 nvme0n1 00:29:13.529 18:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:13.529 18:41:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:13.529 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:13.529 Zero copy mechanism will not be used. 00:29:13.529 Running I/O for 2 seconds... 
00:29:15.413 3137.00 IOPS, 392.12 MiB/s [2024-12-06T17:41:10.197Z] 3220.00 IOPS, 402.50 MiB/s 00:29:15.413 Latency(us) 00:29:15.413 [2024-12-06T17:41:10.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.413 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:15.413 nvme0n1 : 2.00 3223.81 402.98 0.00 0.00 4960.22 955.73 9721.17 00:29:15.413 [2024-12-06T17:41:10.197Z] =================================================================================================================== 00:29:15.413 [2024-12-06T17:41:10.197Z] Total : 3223.81 402.98 0.00 0.00 4960.22 955.73 9721.17 00:29:15.413 { 00:29:15.413 "results": [ 00:29:15.413 { 00:29:15.413 "job": "nvme0n1", 00:29:15.413 "core_mask": "0x2", 00:29:15.413 "workload": "randread", 00:29:15.413 "status": "finished", 00:29:15.413 "queue_depth": 16, 00:29:15.413 "io_size": 131072, 00:29:15.413 "runtime": 2.002602, 00:29:15.413 "iops": 3223.8058286169694, 00:29:15.413 "mibps": 402.9757285771212, 00:29:15.413 "io_failed": 0, 00:29:15.413 "io_timeout": 0, 00:29:15.413 "avg_latency_us": 4960.224171829822, 00:29:15.413 "min_latency_us": 955.7333333333333, 00:29:15.413 "max_latency_us": 9721.173333333334 00:29:15.413 } 00:29:15.413 ], 00:29:15.413 "core_count": 1 00:29:15.413 } 00:29:15.413 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:15.413 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:15.413 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:15.413 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:15.413 | select(.opcode=="crc32c") 00:29:15.413 | "\(.module_name) \(.executed)"' 00:29:15.413 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2307354 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2307354 ']' 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2307354 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2307354 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2307354' 00:29:15.674 killing process with pid 2307354 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2307354 00:29:15.674 Received shutdown signal, test time was about 2.000000 seconds 00:29:15.674 00:29:15.674 Latency(us) 00:29:15.674 [2024-12-06T17:41:10.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.674 [2024-12-06T17:41:10.458Z] =================================================================================================================== 00:29:15.674 [2024-12-06T17:41:10.458Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.674 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2307354 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2308040 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2308040 /var/tmp/bperf.sock 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2308040 ']' 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:15.934 18:41:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:15.934 [2024-12-06 18:41:10.570234] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:29:15.935 [2024-12-06 18:41:10.570287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308040 ] 00:29:15.935 [2024-12-06 18:41:10.653465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.935 [2024-12-06 18:41:10.681461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.876 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.876 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:16.876 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:16.876 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:16.876 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:16.876 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.876 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.136 nvme0n1 00:29:17.136 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:17.136 18:41:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:17.397 Running I/O for 2 seconds... 
00:29:19.282 30508.00 IOPS, 119.17 MiB/s [2024-12-06T17:41:14.066Z] 30166.00 IOPS, 117.84 MiB/s 00:29:19.282 Latency(us) 00:29:19.282 [2024-12-06T17:41:14.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.282 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.282 nvme0n1 : 2.00 30167.12 117.84 0.00 0.00 4236.14 2280.11 9065.81 00:29:19.282 [2024-12-06T17:41:14.066Z] =================================================================================================================== 00:29:19.282 [2024-12-06T17:41:14.066Z] Total : 30167.12 117.84 0.00 0.00 4236.14 2280.11 9065.81 00:29:19.282 { 00:29:19.282 "results": [ 00:29:19.282 { 00:29:19.282 "job": "nvme0n1", 00:29:19.282 "core_mask": "0x2", 00:29:19.282 "workload": "randwrite", 00:29:19.282 "status": "finished", 00:29:19.282 "queue_depth": 128, 00:29:19.282 "io_size": 4096, 00:29:19.282 "runtime": 2.004169, 00:29:19.282 "iops": 30167.116645352762, 00:29:19.282 "mibps": 117.84029939590923, 00:29:19.282 "io_failed": 0, 00:29:19.282 "io_timeout": 0, 00:29:19.282 "avg_latency_us": 4236.135907376778, 00:29:19.282 "min_latency_us": 2280.1066666666666, 00:29:19.282 "max_latency_us": 9065.813333333334 00:29:19.282 } 00:29:19.282 ], 00:29:19.282 "core_count": 1 00:29:19.282 } 00:29:19.282 18:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:19.282 18:41:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:19.282 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:19.282 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:19.282 | select(.opcode=="crc32c") 00:29:19.282 | "\(.module_name) \(.executed)"' 00:29:19.282 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2308040 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2308040 ']' 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2308040 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2308040 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2308040' 00:29:19.542 killing process with pid 2308040 00:29:19.542 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2308040 00:29:19.542 Received shutdown signal, test time was about 2.000000 seconds 00:29:19.542 00:29:19.542 Latency(us) 00:29:19.542 [2024-12-06T17:41:14.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.542 [2024-12-06T17:41:14.326Z] =================================================================================================================== 00:29:19.542 [2024-12-06T17:41:14.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:19.543 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2308040 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2308726 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2308726 /var/tmp/bperf.sock 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2308726 ']' 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.803 18:41:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:19.803 [2024-12-06 18:41:14.410422] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:29:19.803 [2024-12-06 18:41:14.410478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2308726 ] 00:29:19.803 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:19.803 Zero copy mechanism will not be used. 00:29:19.803 [2024-12-06 18:41:14.494570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.803 [2024-12-06 18:41:14.523882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.742 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.742 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:20.742 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:20.742 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:20.742 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:20.742 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.742 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.001 nvme0n1 00:29:21.001 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:21.001 18:41:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.261 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:21.261 Zero copy mechanism will not be used. 00:29:21.261 Running I/O for 2 seconds... 
00:29:23.160 3628.00 IOPS, 453.50 MiB/s [2024-12-06T17:41:17.944Z] 4047.50 IOPS, 505.94 MiB/s 00:29:23.160 Latency(us) 00:29:23.160 [2024-12-06T17:41:17.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.160 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:23.160 nvme0n1 : 2.00 4051.55 506.44 0.00 0.00 3945.26 1160.53 12615.68 00:29:23.160 [2024-12-06T17:41:17.944Z] =================================================================================================================== 00:29:23.160 [2024-12-06T17:41:17.944Z] Total : 4051.55 506.44 0.00 0.00 3945.26 1160.53 12615.68 00:29:23.160 { 00:29:23.160 "results": [ 00:29:23.160 { 00:29:23.160 "job": "nvme0n1", 00:29:23.160 "core_mask": "0x2", 00:29:23.160 "workload": "randwrite", 00:29:23.160 "status": "finished", 00:29:23.160 "queue_depth": 16, 00:29:23.160 "io_size": 131072, 00:29:23.160 "runtime": 2.002688, 00:29:23.160 "iops": 4051.5547104691295, 00:29:23.160 "mibps": 506.4443388086412, 00:29:23.160 "io_failed": 0, 00:29:23.160 "io_timeout": 0, 00:29:23.160 "avg_latency_us": 3945.255522142799, 00:29:23.160 "min_latency_us": 1160.5333333333333, 00:29:23.160 "max_latency_us": 12615.68 00:29:23.160 } 00:29:23.160 ], 00:29:23.160 "core_count": 1 00:29:23.160 } 00:29:23.160 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:23.160 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:23.160 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:23.160 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:23.160 | select(.opcode=="crc32c") 00:29:23.160 | "\(.module_name) \(.executed)"' 00:29:23.160 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2308726 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2308726 ']' 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2308726 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:23.421 18:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2308726 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2308726' 00:29:23.421 killing process with pid 2308726 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2308726 00:29:23.421 Received shutdown signal, test time was about 2.000000 seconds 00:29:23.421 00:29:23.421 Latency(us) 00:29:23.421 [2024-12-06T17:41:18.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.421 [2024-12-06T17:41:18.205Z] =================================================================================================================== 00:29:23.421 [2024-12-06T17:41:18.205Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2308726 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2306325 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2306325 ']' 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2306325 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.421 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306325 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306325' 00:29:23.681 killing process with pid 2306325 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2306325 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2306325 00:29:23.681 00:29:23.681 real 0m16.781s 00:29:23.681 user 0m33.187s 00:29:23.681 sys 0m3.691s 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:23.681 ************************************ 00:29:23.681 END TEST nvmf_digest_clean 00:29:23.681 ************************************ 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:23.681 ************************************ 00:29:23.681 START TEST nvmf_digest_error 00:29:23.681 ************************************ 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:23.681 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2309529 00:29:23.682 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2309529 00:29:23.682 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:23.682 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2309529 ']' 00:29:23.682 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.682 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.682 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.682 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.682 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:23.682 [2024-12-06 18:41:18.463555] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:29:23.682 [2024-12-06 18:41:18.463598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.943 [2024-12-06 18:41:18.518194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.943 [2024-12-06 18:41:18.546915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.943 [2024-12-06 18:41:18.546944] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.943 [2024-12-06 18:41:18.546949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.943 [2024-12-06 18:41:18.546954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.943 [2024-12-06 18:41:18.546958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:23.943 [2024-12-06 18:41:18.547417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.943 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.943 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:23.943 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:23.943 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.943 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:23.943 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.943 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:23.944 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.944 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:23.944 [2024-12-06 18:41:18.635846] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:23.944 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.944 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:23.944 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:23.944 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.944 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:23.944 null0 00:29:23.944 [2024-12-06 18:41:18.714695] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.204 [2024-12-06 18:41:18.738867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2309700 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2309700 /var/tmp/bperf.sock 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2309700 ']' 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.204 18:41:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.204 [2024-12-06 18:41:18.796788] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:29:24.204 [2024-12-06 18:41:18.796838] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2309700 ] 00:29:24.204 [2024-12-06 18:41:18.878714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.204 [2024-12-06 18:41:18.908533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.142 18:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.402 nvme0n1 00:29:25.402 18:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:25.402 18:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.402 18:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:29:25.402 18:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.402 18:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:25.402 18:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.402 Running I/O for 2 seconds... 00:29:25.402 [2024-12-06 18:41:20.116521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.402 [2024-12-06 18:41:20.116556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.402 [2024-12-06 18:41:20.116565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.402 [2024-12-06 18:41:20.125470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.402 [2024-12-06 18:41:20.125492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.403 [2024-12-06 18:41:20.125499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.403 [2024-12-06 18:41:20.137487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.403 [2024-12-06 18:41:20.137506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.403 [2024-12-06 18:41:20.137513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.403 [2024-12-06 18:41:20.146597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.403 [2024-12-06 18:41:20.146616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.403 [2024-12-06 18:41:20.146623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.403 [2024-12-06 18:41:20.155601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.403 [2024-12-06 18:41:20.155620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.403 [2024-12-06 18:41:20.155626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.403 [2024-12-06 18:41:20.164362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.403 [2024-12-06 18:41:20.164381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.403 [2024-12-06 18:41:20.164388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.403 [2024-12-06 18:41:20.173159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.403 [2024-12-06 18:41:20.173178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.403 [2024-12-06 18:41:20.173192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.403 [2024-12-06 18:41:20.182789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.403 [2024-12-06 18:41:20.182808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.403 [2024-12-06 18:41:20.182814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.663 [2024-12-06 18:41:20.192764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.663 [2024-12-06 18:41:20.192783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.663 [2024-12-06 18:41:20.192790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.663 [2024-12-06 18:41:20.201010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.663 [2024-12-06 18:41:20.201028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:10218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.663 [2024-12-06 18:41:20.201035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.663 [2024-12-06 18:41:20.213157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.663 [2024-12-06 18:41:20.213176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.663 [2024-12-06 18:41:20.213183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.663 [2024-12-06 18:41:20.223516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.663 [2024-12-06 18:41:20.223535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.223542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.233335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.233353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.233360] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.241753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.241771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.241777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.250437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.250456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.250463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.259502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.259525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.259531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.269119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.269138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.269144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.281452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.281470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.281477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.289243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.289262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.289269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.299420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.299438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 
18:41:20.299445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.308358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.308376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.308382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.317704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.317722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.317729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.326512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.326530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.326536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.335513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.335531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.335538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.344472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.344489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.344496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.352943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.352961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.664 [2024-12-06 18:41:20.352968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.664 [2024-12-06 18:41:20.363269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:25.664 [2024-12-06 18:41:20.363287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:765 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.664 [2024-12-06 18:41:20.363294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:25.664 [2024-12-06 18:41:20.372626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60)
00:29:25.664 [2024-12-06 18:41:20.372649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:25.664 [2024-12-06 18:41:20.372656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... elided: the same three-record sequence (nvme_tcp.c:1365 "data digest error on tqpair=(0x1a37d60)", nvme_qpair.c:243 READ command print, nvme_qpair.c:474 "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" completion) repeats for every READ from 18:41:20.381965 through 18:41:21.092654; only the timestamp, cid, and lba fields vary ...]
00:29:26.456 27500.00 IOPS, 107.42 MiB/s [2024-12-06T17:41:21.240Z] [2024-12-06 18:41:21.104114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60)
00:29:26.456 [2024-12-06 18:41:21.104131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.456 [2024-12-06 18:41:21.104137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... elided: the same three-record sequence repeats from 18:41:21.115745 through 18:41:21.674787; only the timestamp, cid, and lba fields vary ...]
00:29:26.981 [2024-12-06 18:41:21.684139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60)
00:29:26.981 [2024-12-06 18:41:21.684160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.981 [2024-12-06 18:41:21.684167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.981 [2024-12-06 18:41:21.692810]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:26.981 [2024-12-06 18:41:21.692827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.981 [2024-12-06 18:41:21.692834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.981 [2024-12-06 18:41:21.702255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:26.981 [2024-12-06 18:41:21.702272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.981 [2024-12-06 18:41:21.702279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.981 [2024-12-06 18:41:21.710132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:26.981 [2024-12-06 18:41:21.710150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.981 [2024-12-06 18:41:21.710157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.981 [2024-12-06 18:41:21.719610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:26.981 [2024-12-06 18:41:21.719627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.981 [2024-12-06 18:41:21.719634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.981 [2024-12-06 18:41:21.731666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:26.981 [2024-12-06 18:41:21.731684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.981 [2024-12-06 18:41:21.731690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.981 [2024-12-06 18:41:21.740496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:26.981 [2024-12-06 18:41:21.740513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.981 [2024-12-06 18:41:21.740520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.981 [2024-12-06 18:41:21.749475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:26.981 [2024-12-06 18:41:21.749492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.981 [2024-12-06 18:41:21.749498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:26.981 [2024-12-06 18:41:21.758778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:26.981 [2024-12-06 18:41:21.758795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.981 [2024-12-06 18:41:21.758802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.243 [2024-12-06 18:41:21.766520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.243 [2024-12-06 18:41:21.766537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.243 [2024-12-06 18:41:21.766544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.243 [2024-12-06 18:41:21.776782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.243 [2024-12-06 18:41:21.776800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.243 [2024-12-06 18:41:21.776806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.243 [2024-12-06 18:41:21.785244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.243 [2024-12-06 18:41:21.785262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.243 [2024-12-06 18:41:21.785269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.243 [2024-12-06 18:41:21.794220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.243 [2024-12-06 18:41:21.794238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.243 [2024-12-06 18:41:21.794244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.243 [2024-12-06 18:41:21.803016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.243 [2024-12-06 18:41:21.803034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.243 [2024-12-06 18:41:21.803040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.812121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.812139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.812146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.820924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.820942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.820948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.830501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.830518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.830525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.839195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.839213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.839223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.847842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.847860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.847866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.857409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.857426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.857433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.865991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.866009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.866015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.874506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.874524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.874531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.884012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.884030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.884036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.892925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.892942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.892948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.901025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.901043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.901050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.909888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.909905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.909912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.920033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.920050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.920057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.928332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.928350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.928356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.937521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.937539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:27.244 [2024-12-06 18:41:21.937546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.946214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.946231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.946237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.955085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.955103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.955109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.964878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.964896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.964902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.973355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.973372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.973380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.982219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.982236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.982243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:21.991676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:21.991693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:21.991703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:22.001209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:22.001226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:17353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:22.001233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:22.009736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:22.009753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.244 [2024-12-06 18:41:22.009760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.244 [2024-12-06 18:41:22.018437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.244 [2024-12-06 18:41:22.018455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.245 [2024-12-06 18:41:22.018462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.505 [2024-12-06 18:41:22.027225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.505 [2024-12-06 18:41:22.027243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.505 [2024-12-06 18:41:22.027249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.505 [2024-12-06 18:41:22.035641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.505 [2024-12-06 18:41:22.035658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.505 [2024-12-06 18:41:22.035665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 [2024-12-06 18:41:22.044726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.506 [2024-12-06 18:41:22.044743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.506 [2024-12-06 18:41:22.044750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 [2024-12-06 18:41:22.053385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.506 [2024-12-06 18:41:22.053402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.506 [2024-12-06 18:41:22.053409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 [2024-12-06 18:41:22.062232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.506 [2024-12-06 18:41:22.062249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.506 [2024-12-06 18:41:22.062256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 [2024-12-06 18:41:22.071719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.506 [2024-12-06 18:41:22.071741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.506 [2024-12-06 18:41:22.071747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 [2024-12-06 18:41:22.079909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.506 [2024-12-06 18:41:22.079926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.506 [2024-12-06 18:41:22.079933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 [2024-12-06 18:41:22.089446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.506 [2024-12-06 18:41:22.089463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.506 [2024-12-06 18:41:22.089469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 [2024-12-06 18:41:22.097646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.506 [2024-12-06 18:41:22.097664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.506 [2024-12-06 18:41:22.097671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 27758.50 IOPS, 108.43 MiB/s [2024-12-06T17:41:22.290Z] [2024-12-06 18:41:22.107491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a37d60) 00:29:27.506 [2024-12-06 18:41:22.107510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.506 [2024-12-06 18:41:22.107516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.506 00:29:27.506 Latency(us) 00:29:27.506 [2024-12-06T17:41:22.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.506 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:27.506 nvme0n1 : 2.04 27210.72 106.29 0.00 0.00 4606.00 2129.92 46312.11 00:29:27.506 [2024-12-06T17:41:22.290Z] =================================================================================================================== 00:29:27.506 [2024-12-06T17:41:22.290Z] Total : 27210.72 106.29 0.00 0.00 4606.00 2129.92 46312.11 00:29:27.506 { 00:29:27.506 "results": [ 
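The table's MiB/s column is just IOPS multiplied by the 4096-byte I/O size; a quick stand-alone check (hypothetical one-liner, not part of the test run):

    awk 'BEGIN { printf "%.2f MiB/s\n", 27210.72 * 4096 / (1024 * 1024) }'   # prints 106.29, matching the table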
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randread",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.044966,
      "iops": 27210.721351846438,
      "mibps": 106.29188028065015,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 4606.0027292059785,
      "min_latency_us": 2129.92,
      "max_latency_us": 46312.10666666667
    }
  ],
  "core_count": 1
}
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 ))
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2309700
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2309700 ']'
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2309700
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309700
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2309700'
killing process with pid 2309700
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2309700
Received shutdown signal, test time was about 2.000000 seconds

Latency(us)
[2024-12-06T17:41:22.551Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
[2024-12-06T17:41:22.551Z] ===================================================================================================================
[2024-12-06T17:41:22.551Z] Total : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2309700
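The 218 checked above is the transient transport error counter the bdev_nvme driver accumulates (enabled via bdev_nvme_set_options --nvme-error-stat, visible later in this trace). The same query can be issued stand-alone against the bdevperf RPC socket, using the rpc.py path and jq filter exactly as the trace does:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'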
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2310461
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2310461 /var/tmp/bperf.sock
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2310461 ']'
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
18:41:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-12-06 18:41:22.567681] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
[2024-12-06 18:41:22.567740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2310461 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
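Reading the bdevperf invocation above: -m 2 pins it to core 1 (mask 0x2), -r names its RPC socket, -w/-o/-t/-q request 2 seconds of random 128 KiB reads at queue depth 16, and -z starts it idle so bdevs can first be configured over RPC (flag meanings per standard bdevperf usage; treat this as a sketch). A minimal stand-alone launch:

    # start bdevperf idle (-z); it waits for configuration over its RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    # crude stand-in for the harness's waitforlisten: poll for the UNIX socket
    until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done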
[2024-12-06 18:41:22.647419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-06 18:41:22.676745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
18:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
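Stripped of xtrace noise, the setup just performed is: enable per-controller NVMe error counters with unlimited retries, clear any stale crc32c error injection, attach the TCP controller with data digest enabled (--ddgst), arm the accel injector to corrupt the next 32 crc32c operations, then start the I/O. A condensed re-creation, with the socket each accel RPC targets inferred from the trace's rpc_cmd vs. bperf_rpc wrappers:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable            # default socket: the app under test
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0            # prints the new bdev name, nvme0n1
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32      # corrupt the next 32 crc32c ops
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests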
[2024-12-06 18:41:24.031965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0)
[2024-12-06 18:41:24.032000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-06 18:41:24.032009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[2024-12-06 18:41:24.038068 .. 18:41:24.481260] -- the same digest-error / transient-transport-error record pair repeats for the 128 KiB random reads (qid:1, len:32, cid and lba varying per command, sqhd cycling 0002/0022/0042/0062); dozens of near-identical pairs are elided here.
[2024-12-06 18:41:24.489347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0)
[2024-12-06 18:41:24.489366]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.955 [2024-12-06 18:41:24.489372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.955 [2024-12-06 18:41:24.501226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.955 [2024-12-06 18:41:24.501244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.955 [2024-12-06 18:41:24.501251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.955 [2024-12-06 18:41:24.513466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.955 [2024-12-06 18:41:24.513485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.955 [2024-12-06 18:41:24.513491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.955 [2024-12-06 18:41:24.525942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.955 [2024-12-06 18:41:24.525961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.955 [2024-12-06 18:41:24.525968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.955 [2024-12-06 18:41:24.538150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.955 [2024-12-06 18:41:24.538168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.955 [2024-12-06 18:41:24.538175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.955 [2024-12-06 18:41:24.550928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.955 [2024-12-06 18:41:24.550947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.955 [2024-12-06 18:41:24.550953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.955 [2024-12-06 18:41:24.563485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.955 [2024-12-06 18:41:24.563504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.955 [2024-12-06 18:41:24.563513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.955 [2024-12-06 18:41:24.574845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 
00:29:29.955 [2024-12-06 18:41:24.574865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.955 [2024-12-06 18:41:24.574872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.586992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.587011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.587017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.599238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.599256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.599263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.612231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.612249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.612256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.623460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.623478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.623485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.634901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.634919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.634926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.646804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.646823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.646829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.659289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.659307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.659314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.671175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.671197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.671203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.683643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.683662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.683668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.695610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.695629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.695635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.708430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.708449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.708456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.719057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.719075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.719082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:29.956 [2024-12-06 18:41:24.728850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:29.956 [2024-12-06 18:41:24.728869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.956 [2024-12-06 18:41:24.728876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.218 [2024-12-06 18:41:24.740579] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.218 [2024-12-06 18:41:24.740598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.218 [2024-12-06 18:41:24.740605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.753071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.753089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.753096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.764123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.764141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.764148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.776002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.776021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.776027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.787683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.787702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.787708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.800100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.800119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.800125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.812717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.812735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.812742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:29:30.219 [2024-12-06 18:41:24.824578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.824597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.824603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.837323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.837342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.837348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.847200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.847219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.847225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.859516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.859534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.859540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.871573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.871592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.871601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.884186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.884205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.884212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.896626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.896650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.896656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.908442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.908461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.908468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.920899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.920918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.920924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.933793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.933812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.933818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.946539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.946556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.946563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.957538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.957557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.957564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.967008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.967026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.967032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:30.219 [2024-12-06 18:41:24.976060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:30.219 [2024-12-06 18:41:24.976078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.219 [2024-12-06 18:41:24.976085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
[... the sequence continues on qid:1 cids 6 and 9 through 18:41:25.021 ...]
00:29:30.483 2885.00 IOPS, 360.62 MiB/s [2024-12-06T17:41:25.267Z]
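The 2885.00 IOPS, 360.62 MiB/s line is the benchmark's periodic progress sample, interleaved with the error records. As a quick consistency check (a sketch, assuming the 4 KiB logical-block size these figures imply), the two numbers agree with the len:32 READs in the surrounding records:

    iops = 2885.00                      # from the progress sample above
    mib_per_s = 360.62
    io_kib = mib_per_s * 1024 / iops    # (MiB/s * 1024 KiB/MiB) / (IO/s)
    print(f"{io_kib:.1f} KiB per I/O")  # ~128.0 KiB = 32 blocks * 4 KiB (assumed LBA size)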
[... the data digest error pattern resumes on qid:1, now spreading across cids 6-12, len:32 READs at varying LBAs, from 18:41:25.034 through 18:41:25.705 ...]
00:29:31.007 [2024-12-06 18:41:25.713152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0)
[2024-12-06 18:41:25.713171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.007 [2024-12-06 18:41:25.713178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.007 [2024-12-06 18:41:25.721600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.007 [2024-12-06 18:41:25.721619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.007 [2024-12-06 18:41:25.721628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.007 [2024-12-06 18:41:25.729075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.007 [2024-12-06 18:41:25.729094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.007 [2024-12-06 18:41:25.729101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.007 [2024-12-06 18:41:25.738648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.007 [2024-12-06 18:41:25.738667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.007 [2024-12-06 18:41:25.738673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.007 [2024-12-06 18:41:25.747891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.007 [2024-12-06 18:41:25.747910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.007 [2024-12-06 18:41:25.747917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.007 [2024-12-06 18:41:25.757654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.007 [2024-12-06 18:41:25.757673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.008 [2024-12-06 18:41:25.757679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.008 [2024-12-06 18:41:25.769633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.008 [2024-12-06 18:41:25.769659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.008 [2024-12-06 18:41:25.769665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.008 [2024-12-06 18:41:25.778695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.008 [2024-12-06 18:41:25.778713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.008 [2024-12-06 18:41:25.778720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.008 [2024-12-06 18:41:25.787112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.008 [2024-12-06 18:41:25.787130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.008 [2024-12-06 18:41:25.787136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.793812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.793833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.793840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.802018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.802037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.802044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.813048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.813067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.813073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.822786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.822805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.822811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.833259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.833277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.833284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.840249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.840268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.840274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.847913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.847931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.847938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.858503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.858522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.858528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.869309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.869328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.869334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.881920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.881939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.881949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.893533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.893551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.893559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.903629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.903652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.903659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.911104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 
00:29:31.268 [2024-12-06 18:41:25.911123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.911130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.917327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.917346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.917353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.921926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.921945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.921951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.931069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.931088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.931094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.940445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.940465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.940471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.950125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.950144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.950151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.961228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.961250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.961256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.971738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.971757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.971763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.268 [2024-12-06 18:41:25.980839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.268 [2024-12-06 18:41:25.980857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.268 [2024-12-06 18:41:25.980864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.269 [2024-12-06 18:41:25.986904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.269 [2024-12-06 18:41:25.986923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.269 [2024-12-06 18:41:25.986930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.269 [2024-12-06 18:41:25.997005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.269 [2024-12-06 18:41:25.997024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.269 [2024-12-06 18:41:25.997031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.269 [2024-12-06 18:41:26.005983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.269 [2024-12-06 18:41:26.006002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.269 [2024-12-06 18:41:26.006008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.269 [2024-12-06 18:41:26.011771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.269 [2024-12-06 18:41:26.011789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.269 [2024-12-06 18:41:26.011795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.269 [2024-12-06 18:41:26.019055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0) 00:29:31.269 [2024-12-06 18:41:26.019074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.269 [2024-12-06 18:41:26.019080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.269 [2024-12-06 18:41:26.025813] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0)
00:29:31.269 [2024-12-06 18:41:26.025831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.269 [2024-12-06 18:41:26.025838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:31.269 3110.50 IOPS, 388.81 MiB/s [2024-12-06T17:41:26.053Z] [2024-12-06 18:41:26.034621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22698c0)
00:29:31.269 [2024-12-06 18:41:26.034644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.269 [2024-12-06 18:41:26.034651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:31.269
00:29:31.269 Latency(us)
00:29:31.269 [2024-12-06T17:41:26.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:31.269 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:31.269 nvme0n1 : 2.01 3108.60 388.57 0.00 0.00 5142.12 778.24 13161.81
00:29:31.269 [2024-12-06T17:41:26.053Z] ===================================================================================================================
00:29:31.269 [2024-12-06T17:41:26.053Z] Total : 3108.60 388.57 0.00 0.00 5142.12 778.24 13161.81
00:29:31.269 {
00:29:31.269 "results": [
00:29:31.269 {
00:29:31.269 "job": "nvme0n1",
00:29:31.269 "core_mask": "0x2",
00:29:31.269 "workload": "randread",
00:29:31.269 "status": "finished",
00:29:31.269 "queue_depth": 16,
00:29:31.269 "io_size": 131072,
00:29:31.269 "runtime": 2.00637,
00:29:31.269 "iops": 3108.5991118288252,
00:29:31.269 "mibps": 388.57488897860316,
00:29:31.269 "io_failed": 0,
00:29:31.269 "io_timeout": 0,
00:29:31.269 "avg_latency_us": 5142.119899524344,
00:29:31.269 "min_latency_us": 778.24,
00:29:31.269 "max_latency_us": 13161.813333333334
00:29:31.269 }
00:29:31.269 ],
00:29:31.269 "core_count": 1
00:29:31.269 }
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:31.531 | .driver_specific
00:29:31.531 | .nvme_error
00:29:31.531 | .status_code
00:29:31.531 | .command_transient_transport_error'
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 202 > 0 ))
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2310461
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2310461 ']'
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2310461
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:31.531 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2310461
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2310461'
00:29:31.792 killing process with pid 2310461
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2310461
00:29:31.792 Received shutdown signal, test time was about 2.000000 seconds
00:29:31.792
00:29:31.792 Latency(us)
00:29:31.792 [2024-12-06T17:41:26.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:31.792 [2024-12-06T17:41:26.576Z] ===================================================================================================================
00:29:31.792 [2024-12-06T17:41:26.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2310461
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2311144
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2311144 /var/tmp/bperf.sock
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2311144 ']'
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:31.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:31.792 18:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:31.792 [2024-12-06 18:41:26.469711] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:29:31.792 [2024-12-06 18:41:26.469767] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311144 ]
00:29:31.792 [2024-12-06 18:41:26.551043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:32.052 [2024-12-06 18:41:26.579865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:32.696 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:32.987 nvme0n1
00:29:32.987 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:32.987 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:32.987 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:32.987 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:32.987 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:32.987 18:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:33.292 Running I/O for 2 seconds...
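For reference, the digest-error pass that the trace above configures reduces to the following sequence. This is a minimal sketch reconstructed from the xtrace lines; the rpc and errs variables are shorthand introduced here for illustration, and in the harness the accel_error_inject_error call goes through rpc_cmd (the default RPC socket of the target application) rather than /var/tmp/bperf.sock.

    #!/usr/bin/env bash
    # Sketch of the digest-error pass, assuming a bdevperf instance is already
    # listening on /var/tmp/bperf.sock (launched with -w randwrite -o 4096 -q 128 -z).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
    # so injected digest errors are counted as transient errors instead of
    # failing the job (io_failed stays 0 in the JSON summary above).
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target with the NVMe/TCP data digest (DDGST) enabled.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 256th crc32c produced by the accel layer, so a fraction of
    # PDUs carry a bad data digest. (Issued against the target-side socket.)
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

    # Run the 2-second job, then require that at least one COMMAND TRANSIENT
    # TRANSPORT ERROR (00/22) completion was recorded; the randread pass above saw 202.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
    errs=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))

The numbers in the randread summary above are consistent with this setup: the reported 388.57 MiB/s is simply iops × io_size (3108.60 × 128 KiB = 388.57 MiB/s), and each injected digest error surfaces as a retried READ with a COMMAND TRANSIENT TRANSPORT ERROR completion rather than as an I/O failure.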
00:29:33.292 [2024-12-06 18:41:27.798112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef5378 00:29:33.292 [2024-12-06 18:41:27.799046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.799073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.806949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee5ec8 00:29:33.292 [2024-12-06 18:41:27.807846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.807864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.815457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef0ff8 00:29:33.292 [2024-12-06 18:41:27.816359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.816376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.823970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efd640 00:29:33.292 [2024-12-06 18:41:27.824876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.824892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.832488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee5ec8 00:29:33.292 [2024-12-06 18:41:27.833411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.833427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.840984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef0ff8 00:29:33.292 [2024-12-06 18:41:27.841891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.841908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.849468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efd640 00:29:33.292 [2024-12-06 18:41:27.850373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.850389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.857272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef6cc8 00:29:33.292 [2024-12-06 18:41:27.858072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.858089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.866311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee4140 00:29:33.292 [2024-12-06 18:41:27.867232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.867249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.875071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efd208 00:29:33.292 [2024-12-06 18:41:27.875968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.875985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.883530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef35f0 00:29:33.292 [2024-12-06 18:41:27.884430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.884447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.891986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef9f68 00:29:33.292 [2024-12-06 18:41:27.892856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.892873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.900450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef6cc8 00:29:33.292 [2024-12-06 18:41:27.901355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.901372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.908899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee23b8 00:29:33.292 [2024-12-06 18:41:27.909800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.292 [2024-12-06 18:41:27.909817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:33.292 [2024-12-06 18:41:27.917344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eff3c8 00:29:33.292 [2024-12-06 18:41:27.918238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.918254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.926905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef5378 00:29:33.293 [2024-12-06 18:41:27.928244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.928263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.934812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee49b0 00:29:33.293 [2024-12-06 18:41:27.935865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.935881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.943175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee2c28 00:29:33.293 [2024-12-06 18:41:27.944226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.944242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.951629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee1b48 00:29:33.293 [2024-12-06 18:41:27.952664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.952680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.960092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee9168 00:29:33.293 [2024-12-06 18:41:27.961126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.961142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.968548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee8088 00:29:33.293 [2024-12-06 18:41:27.969591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.969607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.977001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee6fa8 00:29:33.293 [2024-12-06 18:41:27.978026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.978042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.985466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee5ec8 00:29:33.293 [2024-12-06 18:41:27.986506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.986523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:27.993948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ede8a8 00:29:33.293 [2024-12-06 18:41:27.994939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:27.994955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:28.002395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf988 00:29:33.293 [2024-12-06 18:41:28.003451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:28.003467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:28.010849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee0a68 00:29:33.293 [2024-12-06 18:41:28.011844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:28.011860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:28.019287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee4578 00:29:33.293 [2024-12-06 18:41:28.020316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:28.020333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:28.027747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eeea00 00:29:33.293 [2024-12-06 18:41:28.028769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:28.028786] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.293 [2024-12-06 18:41:28.036202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eed920 00:29:33.293 [2024-12-06 18:41:28.037213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.293 [2024-12-06 18:41:28.037230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.556 [2024-12-06 18:41:28.044686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eec840 00:29:33.556 [2024-12-06 18:41:28.045701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.556 [2024-12-06 18:41:28.045718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.556 [2024-12-06 18:41:28.053121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef92c0 00:29:33.556 [2024-12-06 18:41:28.054153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.556 [2024-12-06 18:41:28.054170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.556 [2024-12-06 18:41:28.061569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef7100 00:29:33.556 [2024-12-06 18:41:28.062616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.556 [2024-12-06 18:41:28.062632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.556 [2024-12-06 18:41:28.070036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee0ea0 00:29:33.556 [2024-12-06 18:41:28.071069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.556 [2024-12-06 18:41:28.071085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.556 [2024-12-06 18:41:28.078487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef6020 00:29:33.556 [2024-12-06 18:41:28.079475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.556 [2024-12-06 18:41:28.079492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:33.556 [2024-12-06 18:41:28.086958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee1f80 00:29:33.556 [2024-12-06 18:41:28.087992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.556 [2024-12-06 18:41:28.088008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:33.556 [2024-12-06 18:41:28.095419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee4140
00:29:33.556 [2024-12-06 18:41:28.096418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:33.556 [2024-12-06 18:41:28.096434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[... repeated record groups omitted (18:41:28.103885 through 18:41:28.788500): each tcp.c:2241:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x1cbfeb0) is followed by the affected WRITE command and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, across varying pdu offsets, cids, and lbas ...]
00:29:34.084 30088.00 IOPS, 117.53 MiB/s [2024-12-06T17:41:28.868Z]
[... repeated record groups omitted (18:41:28.796221 through 18:41:29.253812): same error/WRITE/completion pattern, with every data digest error now reported on pdu=0x200016edf550 ...]
00:29:34.610 [2024-12-06 18:41:29.261559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 [2024-12-06 18:41:29.262250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:113 nsid:1 lba:22255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.610 [2024-12-06 18:41:29.262267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.610 [2024-12-06 18:41:29.270020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.610 [2024-12-06 18:41:29.270713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.610 [2024-12-06 18:41:29.270730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.610 [2024-12-06 18:41:29.278482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.610 [2024-12-06 18:41:29.279180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.610 [2024-12-06 18:41:29.279197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.610 [2024-12-06 18:41:29.286943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.610 [2024-12-06 18:41:29.287616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.610 [2024-12-06 18:41:29.287633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.610 [2024-12-06 18:41:29.295395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.610 [2024-12-06 18:41:29.296077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.610 [2024-12-06 18:41:29.296095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.303842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.611 [2024-12-06 18:41:29.304500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.304517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.312294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.611 [2024-12-06 18:41:29.312939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.312956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.320755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.611 [2024-12-06 18:41:29.321398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.321418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.329232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.611 [2024-12-06 18:41:29.329916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.329933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.337693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.611 [2024-12-06 18:41:29.338378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.338395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.346150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.611 [2024-12-06 18:41:29.346817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.346835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.354596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.611 [2024-12-06 18:41:29.355280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.355297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.363075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016edf550 00:29:34.611 [2024-12-06 18:41:29.363767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.363784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.371808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eeea00 00:29:34.611 [2024-12-06 18:41:29.372224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.372241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.380524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef8e88 00:29:34.611 [2024-12-06 
18:41:29.381294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.381310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:34.611 [2024-12-06 18:41:29.388884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eecc78 00:29:34.611 [2024-12-06 18:41:29.389683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.611 [2024-12-06 18:41:29.389699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:34.872 [2024-12-06 18:41:29.398439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eedd58 00:29:34.872 [2024-12-06 18:41:29.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.872 [2024-12-06 18:41:29.399717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:34.872 [2024-12-06 18:41:29.406361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee1710 00:29:34.872 [2024-12-06 18:41:29.407294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.872 [2024-12-06 18:41:29.407311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:34.872 [2024-12-06 18:41:29.415029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee73e0 00:29:34.872 [2024-12-06 18:41:29.415726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.872 [2024-12-06 18:41:29.415743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:34.872 [2024-12-06 18:41:29.423931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee7c50 00:29:34.872 [2024-12-06 18:41:29.425079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.872 [2024-12-06 18:41:29.425095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:34.872 [2024-12-06 18:41:29.430995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efb480 00:29:34.872 [2024-12-06 18:41:29.431721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.431738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.439340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee3498 
00:29:34.873 [2024-12-06 18:41:29.440000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.440016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.447780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efdeb0 00:29:34.873 [2024-12-06 18:41:29.448453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.448469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.456231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eebb98 00:29:34.873 [2024-12-06 18:41:29.456790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.456806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.464810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef2510 00:29:34.873 [2024-12-06 18:41:29.465435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.465451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.473499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eefae0 00:29:34.873 [2024-12-06 18:41:29.474135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.474151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.481975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee38d0 00:29:34.873 [2024-12-06 18:41:29.482602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.482619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.490446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef2510 00:29:34.873 [2024-12-06 18:41:29.491118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.491134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.498907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) 
with pdu=0x200016eefae0 00:29:34.873 [2024-12-06 18:41:29.499579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.499594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.507375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee38d0 00:29:34.873 [2024-12-06 18:41:29.508043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.508060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.515851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef2510 00:29:34.873 [2024-12-06 18:41:29.516524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.516540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.524346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eefae0 00:29:34.873 [2024-12-06 18:41:29.525031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.525047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.532795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee38d0 00:29:34.873 [2024-12-06 18:41:29.533469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.533485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.541369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eebfd0 00:29:34.873 [2024-12-06 18:41:29.541989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.542008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.549825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef7970 00:29:34.873 [2024-12-06 18:41:29.550505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.550521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.558304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1cbfeb0) with pdu=0x200016ee6b70 00:29:34.873 [2024-12-06 18:41:29.558962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.558978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.566863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eea680 00:29:34.873 [2024-12-06 18:41:29.567539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.567556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.575335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efc560 00:29:34.873 [2024-12-06 18:41:29.576015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.576032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.583784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef5378 00:29:34.873 [2024-12-06 18:41:29.584462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.584478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.592227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eefae0 00:29:34.873 [2024-12-06 18:41:29.592910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.592926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.600682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee38d0 00:29:34.873 [2024-12-06 18:41:29.601356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.601372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.873 [2024-12-06 18:41:29.609144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eebfd0 00:29:34.873 [2024-12-06 18:41:29.609779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.873 [2024-12-06 18:41:29.609795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.874 [2024-12-06 18:41:29.617592] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef7970 00:29:34.874 [2024-12-06 18:41:29.618265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.874 [2024-12-06 18:41:29.618282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.874 [2024-12-06 18:41:29.626036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee6b70 00:29:34.874 [2024-12-06 18:41:29.626713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.874 [2024-12-06 18:41:29.626729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.874 [2024-12-06 18:41:29.634521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eea680 00:29:34.874 [2024-12-06 18:41:29.635198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.874 [2024-12-06 18:41:29.635214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.874 [2024-12-06 18:41:29.642983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efc560 00:29:34.874 [2024-12-06 18:41:29.643641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.874 [2024-12-06 18:41:29.643658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:34.874 [2024-12-06 18:41:29.651445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef5378 00:29:34.874 [2024-12-06 18:41:29.652120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.874 [2024-12-06 18:41:29.652136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.659899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eefae0 00:29:35.136 [2024-12-06 18:41:29.660571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.660587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.668351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee38d0 00:29:35.136 [2024-12-06 18:41:29.668981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.668998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.676784] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eebfd0 00:29:35.136 [2024-12-06 18:41:29.677445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.677461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.685233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef7970 00:29:35.136 [2024-12-06 18:41:29.685904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.685920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.693695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee6b70 00:29:35.136 [2024-12-06 18:41:29.694370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.694386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.702151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eea680 00:29:35.136 [2024-12-06 18:41:29.702781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.702798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.710596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efc560 00:29:35.136 [2024-12-06 18:41:29.711274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.711290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.719035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef5378 00:29:35.136 [2024-12-06 18:41:29.719700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.719716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.727466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eefae0 00:29:35.136 [2024-12-06 18:41:29.728134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.728151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 
[2024-12-06 18:41:29.735931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee38d0 00:29:35.136 [2024-12-06 18:41:29.736596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.736612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.744409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eebfd0 00:29:35.136 [2024-12-06 18:41:29.745074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.745091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.752872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef7970 00:29:35.136 [2024-12-06 18:41:29.753545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.753561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.761306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ee6b70 00:29:35.136 [2024-12-06 18:41:29.761973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.136 [2024-12-06 18:41:29.761992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.136 [2024-12-06 18:41:29.769755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eea680 00:29:35.137 [2024-12-06 18:41:29.770427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.137 [2024-12-06 18:41:29.770443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.137 [2024-12-06 18:41:29.778204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016efc560 00:29:35.137 [2024-12-06 18:41:29.778858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.137 [2024-12-06 18:41:29.778875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:35.137 [2024-12-06 18:41:29.786673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016ef5378 00:29:35.137 [2024-12-06 18:41:29.787333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.137 [2024-12-06 18:41:29.787349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 
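Every entry trimmed above follows the same three-line pattern, so the volume of injected failures can be tallied straight from a saved copy of this console log. A hypothetical spot-check (the file name digest.log is illustrative only, not a file produced by this job); on a clean run the two counts should track each other:

    # Count injected digest errors and the error completions they produced.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' digest.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' digest.log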
00:29:35.137  30152.00 IOPS, 117.78 MiB/s [2024-12-06T17:41:29.921Z]
00:29:35.137 [2024-12-06 18:41:29.795110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cbfeb0) with pdu=0x200016eefae0
00:29:35.137 [2024-12-06 18:41:29.795786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.137 [2024-12-06 18:41:29.795803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:35.137
00:29:35.137 Latency(us)
00:29:35.137 [2024-12-06T17:41:29.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:35.137 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:35.137 nvme0n1 : 2.01 30153.08 117.79 0.00 0.00 4239.36 1706.67 10158.08
00:29:35.137 [2024-12-06T17:41:29.921Z] ===================================================================================================================
00:29:35.137 [2024-12-06T17:41:29.921Z] Total : 30153.08 117.79 0.00 0.00 4239.36 1706.67 10158.08
00:29:35.137 {
00:29:35.137   "results": [
00:29:35.137     {
00:29:35.137       "job": "nvme0n1",
00:29:35.137       "core_mask": "0x2",
00:29:35.137       "workload": "randwrite",
00:29:35.137       "status": "finished",
00:29:35.137       "queue_depth": 128,
00:29:35.137       "io_size": 4096,
00:29:35.137       "runtime": 2.006362,
00:29:35.137       "iops": 30153.083042840724,
00:29:35.137       "mibps": 117.78548063609658,
00:29:35.137       "io_failed": 0,
00:29:35.137       "io_timeout": 0,
00:29:35.137       "avg_latency_us": 4239.363610918267,
00:29:35.137       "min_latency_us": 1706.6666666666667,
00:29:35.137       "max_latency_us": 10158.08
00:29:35.137     }
00:29:35.137   ],
00:29:35.137   "core_count": 1
00:29:35.137 }
00:29:35.137 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:35.137 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:35.137 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:35.137 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:35.137 | .driver_specific
00:29:35.137 | .nvme_error
00:29:35.137 | .status_code
00:29:35.137 | .command_transient_transport_error'
00:29:35.398 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 237 > 0 ))
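The pass/fail gate traced just above reduces to a small helper. What follows is a sketch reconstructed from the xtrace output (host/digest.sh@18-@28 and @71), not the script itself; it assumes the same rpc.py path and bperf socket used in this run, and a bdevperf configured with --nvme-error-stat so the per-status-code counters exist:

    #!/usr/bin/env bash
    # Reconstructed sketch of get_transient_errcount, based on the trace above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    get_transient_errcount() {
        local bdev=$1
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    # This run counted 237 such completions; the test only requires more than 0.
    (( $(get_transient_errcount nvme0n1) > 0 ))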
00:29:35.398 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2311144
00:29:35.398 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2311144 ']'
00:29:35.398 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2311144
00:29:35.398 18:41:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:35.398 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:35.398 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311144
00:29:35.398 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:35.398 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:35.398 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311144'
00:29:35.398 killing process with pid 2311144
00:29:35.398 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2311144
00:29:35.398 Received shutdown signal, test time was about 2.000000 seconds
00:29:35.398
00:29:35.398 Latency(us)
00:29:35.399 [2024-12-06T17:41:30.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:35.399 [2024-12-06T17:41:30.183Z] ===================================================================================================================
00:29:35.399 [2024-12-06T17:41:30.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2311144
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2311835
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2311835 /var/tmp/bperf.sock
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2311835 ']'
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:35.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:35.399 18:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:35.659 [2024-12-06 18:41:30.217179] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:29:35.659 [2024-12-06 18:41:30.217231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2311835 ]
00:29:35.659 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:35.659 Zero copy mechanism will not be used.
00:29:35.659 [2024-12-06 18:41:30.302709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:35.659 [2024-12-06 18:41:30.330443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:36.231 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:36.231 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:36.498 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:36.498 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:36.498 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:36.498 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.498 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.498 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.498 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:36.498 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:36.758 nvme0n1
00:29:36.758 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:36.758 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:36.758 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.758 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:36.758 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:36.758 18:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:37.020 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:37.020 Zero copy mechanism will not be used.
00:29:37.020 Running I/O for 2 seconds...
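Before the 2-second run starts, the trace above re-arms the error plumbing for the 128 KiB workload. Below is a sketch of that sequence reconstructed from the xtrace output (host/digest.sh@61-@67), not taken from the script itself; it assumes that rpc_cmd, which carries no -s flag in the trace, talks to the target application's default RPC socket:

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Initiator (bdevperf) side: keep per-status-code NVMe error counters and
    # never retry, so each digest failure stays visible in bdev_get_iostat.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Via the default RPC socket (rpc_cmd in the trace): leave crc32c intact
    # while the controller attaches with data digest (--ddgst) enabled over TCP.
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Then corrupt crc32c results (flags copied verbatim from the trace) so the
    # queued WRITEs fail their data digest check and complete with the
    # TRANSIENT TRANSPORT ERROR seen below.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32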
00:29:37.020 [2024-12-06 18:41:31.555497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8
00:29:37.020 [2024-12-06 18:41:31.555837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.020 [2024-12-06 18:41:31.555863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... console log trimmed: the same three-line pattern (injected data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8, the queued 128 KiB WRITE (len:32), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for roughly 40 further commands between 18:41:31.566967 and 18:41:31.937693, with varying cid and lba values ...]
00:29:37.283 [2024-12-06 18:41:31.946821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8
00:29:37.283 [2024-12-06 18:41:31.946879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:29:37.283 [2024-12-06 18:41:31.946895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:31.954370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:31.954661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:31.954678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:31.961571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:31.961824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:31.961841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:31.970546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:31.970829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:31.970846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:31.978858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:31.978939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:31.978955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:31.987464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:31.987512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:31.987531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:31.992777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:31.993005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:31.993030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:32.003287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:32.003334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:32.003349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:32.014927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:32.015159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:32.015176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:32.026354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:32.026611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:32.026628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:32.037255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:32.037538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:32.037554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:32.045963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:32.046021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:32.046036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:32.053549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:32.053614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:32.053629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.283 [2024-12-06 18:41:32.062480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.283 [2024-12-06 18:41:32.062731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.283 [2024-12-06 18:41:32.062747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.072599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.072662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.072678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.082272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.082458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.082474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.090684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.090736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.090752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.099273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.099329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.099344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.107167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.107224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.107240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.113467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.113738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.113756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.123063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.123129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.123145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.131492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.131727] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.131742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.140833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.140890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.544 [2024-12-06 18:41:32.140908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.544 [2024-12-06 18:41:32.151317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.544 [2024-12-06 18:41:32.151372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.151387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.161832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.162116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.162133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.173459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.173687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.173704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.184486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.184804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.184822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.192465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.192511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.192527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.201803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.201848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.201864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.209906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.210179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.210196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.217528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.217605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.217621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.224050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.224251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.224270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.232536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.232748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.232765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.242221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.242279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.242295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.250764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.250830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.250846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.259487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 
18:41:32.259799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.259816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.267251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.267504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.267521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.272905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.273095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.273111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.278124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.278333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.278349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.282910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.283113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.283129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.288768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.288941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.288957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.294196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.294440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.294456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.302092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 
00:29:37.545 [2024-12-06 18:41:32.302282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.302298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.306972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.307162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.307178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.316844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.317139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.317157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.545 [2024-12-06 18:41:32.325029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.545 [2024-12-06 18:41:32.325220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.545 [2024-12-06 18:41:32.325236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.807 [2024-12-06 18:41:32.333692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.807 [2024-12-06 18:41:32.333978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.807 [2024-12-06 18:41:32.333997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.807 [2024-12-06 18:41:32.342239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.807 [2024-12-06 18:41:32.342515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.807 [2024-12-06 18:41:32.342533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.807 [2024-12-06 18:41:32.348635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.807 [2024-12-06 18:41:32.348834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.807 [2024-12-06 18:41:32.348850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.807 [2024-12-06 18:41:32.353260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.807 [2024-12-06 18:41:32.353447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.807 [2024-12-06 18:41:32.353464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.807 [2024-12-06 18:41:32.360495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.807 [2024-12-06 18:41:32.360691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.807 [2024-12-06 18:41:32.360708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.807 [2024-12-06 18:41:32.368139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.807 [2024-12-06 18:41:32.368328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.807 [2024-12-06 18:41:32.368344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.373877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.374065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.374082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.381380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.381579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.381595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.388187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.388377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.388394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.397662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.397960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.397978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.407414] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.407627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.407649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.416016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.416309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.416330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.421735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.422056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.422074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.427412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.427732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.427749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.435955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.436247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.436265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.443527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.443864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.443882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.451309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.451592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.451610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.459835] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.460023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.460040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.467789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.468054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.468072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.474932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.475247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.475265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.483070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.483359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.483377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.491137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.491451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.491469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.499983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.500294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.500311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.509127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.509315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.509331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.808 
[2024-12-06 18:41:32.518484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.518679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.518696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.526102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.526293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.526310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.533555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.533847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.533864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.542264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.542598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.542616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.549434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.549624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.549646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.808 3584.00 IOPS, 448.00 MiB/s [2024-12-06T17:41:32.592Z] [2024-12-06 18:41:32.557605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.557807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.557824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.563523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.563726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.563744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.572227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.572506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.808 [2024-12-06 18:41:32.572524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.808 [2024-12-06 18:41:32.582299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:37.808 [2024-12-06 18:41:32.582588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.809 [2024-12-06 18:41:32.582604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.070 [2024-12-06 18:41:32.592194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.070 [2024-12-06 18:41:32.592381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.070 [2024-12-06 18:41:32.592398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.070 [2024-12-06 18:41:32.599551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.070 [2024-12-06 18:41:32.599743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.070 [2024-12-06 18:41:32.599760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.070 [2024-12-06 18:41:32.607666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.070 [2024-12-06 18:41:32.608012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.070 [2024-12-06 18:41:32.608030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.070 [2024-12-06 18:41:32.613957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.070 [2024-12-06 18:41:32.614145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.070 [2024-12-06 18:41:32.614162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.070 [2024-12-06 18:41:32.623399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.070 [2024-12-06 18:41:32.623737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.070 [2024-12-06 18:41:32.623755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.070 [2024-12-06 18:41:32.631433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.071 [2024-12-06 18:41:32.631536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.071 [2024-12-06 18:41:32.631552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.071 [2024-12-06 18:41:32.640691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.071 [2024-12-06 18:41:32.640889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.071 [2024-12-06 18:41:32.640906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.071 [2024-12-06 18:41:32.649677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.071 [2024-12-06 18:41:32.649890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.071 [2024-12-06 18:41:32.649907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.071 [2024-12-06 18:41:32.659890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.071 [2024-12-06 18:41:32.660197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.071 [2024-12-06 18:41:32.660214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.071 [2024-12-06 18:41:32.671270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.071 [2024-12-06 18:41:32.671537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.071 [2024-12-06 18:41:32.671555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.071 [2024-12-06 18:41:32.682069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.071 [2024-12-06 18:41:32.682466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.071 [2024-12-06 18:41:32.682483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.071 [2024-12-06 18:41:32.693143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8 00:29:38.071 [2024-12-06 18:41:32.693407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.071 [2024-12-06 
18:41:32.693424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:38.071 [2024-12-06 18:41:32.704971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8
00:29:38.071 [2024-12-06 18:41:32.705293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.071 [2024-12-06 18:41:32.705310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:38.071 [2024-12-06 18:41:32.716784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8
00:29:38.071 [2024-12-06 18:41:32.717021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.071 [2024-12-06 18:41:32.717040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line pattern repeats for every remaining command from 18:41:32.726 through 18:41:33.555: a data_crc32_calc_done digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8, the affected WRITE (sqid:1 cid:0 nsid:1, len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:29:38.859 3509.00 IOPS, 438.62 MiB/s [2024-12-06T17:41:33.643Z]
00:29:38.859 [2024-12-06 18:41:33.562907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cc01f0) with pdu=0x200016eff3c8
00:29:38.859 [2024-12-06 18:41:33.563125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.859 [2024-12-06 18:41:33.563141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
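Before the full statistics, bperf prints the interim throughput line above; at a 131072-byte IO size the IOPS and MiB/s columns are two views of the same number, which is worth a quick cross-check. A minimal sketch of that arithmetic (awk assumed available, as it is everywhere in this test bed):

  # Cross-check the interim line: MiB/s == IOPS * io_size / 1 MiB
  awk 'BEGIN { iops = 3509.00; io_size = 131072
               printf "%.2f MiB/s\n", iops * io_size / 1048576 }'
  # expect ~438.62 MiB/s, matching "3509.00 IOPS, 438.62 MiB/s" above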
00:29:38.859
00:29:38.859 Latency(us)
[2024-12-06T17:41:33.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.859 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:38.859 nvme0n1 : 2.01 3507.78 438.47 0.00 0.00 4551.83 1399.47 12178.77
[2024-12-06T17:41:33.643Z] ===================================================================================================================
[2024-12-06T17:41:33.643Z] Total : 3507.78 438.47 0.00 0.00 4551.83 1399.47 12178.77
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randwrite",
      "status": "finished",
      "queue_depth": 16,
      "io_size": 131072,
      "runtime": 2.006398,
      "iops": 3507.7786162067546,
      "mibps": 438.4723270258443,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 4551.825010893246,
      "min_latency_us": 1399.4666666666667,
      "max_latency_us": 12178.773333333333
    }
  ],
  "core_count": 1
}
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 ))
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2311835
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2311835 ']'
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2311835
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2311835
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2311835'
killing process with pid 2311835
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2311835
Received shutdown signal, test time was about 2.000000 seconds
00:29:39.119
00:29:39.119 Latency(us)
[2024-12-06T17:41:33.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-06T17:41:33.903Z] ===================================================================================================================
[2024-12-06T17:41:33.903Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
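The (( 228 > 0 )) assertion above is host/digest.sh confirming that the deliberately corrupted data digests really did surface as transient transport errors on the bdev. A standalone sketch of that same counter query, assuming (as in this run) a bdevperf instance listening on /var/tmp/bperf.sock and a bdev named nvme0n1:

  # Sketch of the transient-error count query traced above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "digest corruption was detected $errcount times"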
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2311835
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2309529
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2309529 ']'
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2309529
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
18:41:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2309529
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2309529'
killing process with pid 2309529
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2309529
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2309529
00:29:39.380
00:29:39.380 real 0m15.717s
00:29:39.380 user 0m31.852s
00:29:39.380 sys 0m3.412s
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:39.380 ************************************
00:29:39.380 END TEST nvmf_digest_error
00:29:39.380 ************************************
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2309529 ']'
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2309529
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2309529 ']'
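Every killprocess invocation in this teardown follows the shape traced above: probe with kill -0, confirm the target's name with ps, refuse to kill a sudo wrapper, then kill and wait. A simplified, hypothetical rendering of that shape (the real helper in test/common/autotest_common.sh carries more error handling):

  # Hypothetical simplification of the killprocess pattern traced above.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                        # @954 guard
      kill -0 "$pid" 2> /dev/null || return 0          # @958: already gone
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")  # @959/@960
      [ "$process_name" != sudo ] || return 1          # @964: never kill sudo
      echo "killing process with pid $pid"             # @972
      kill "$pid"                                      # @973
      wait "$pid" 2> /dev/null                         # @978: reap if child
  }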
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2309529
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2309529) - No such process
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2309529 is not found'
Process with pid 2309529 is not found
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
18:41:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
18:41:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:41.549
00:29:41.549 real 0m42.549s
00:29:41.549 user 1m7.271s
00:29:41.549 sys 0m12.866s
18:41:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
18:41:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:41.549 ************************************
00:29:41.549 END TEST nvmf_digest
00:29:41.549 ************************************
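Inside the nvmf_tcp_fini teardown above, the iptr helper restores the firewall with every SPDK_NVMF-tagged rule filtered out; the whole idea is the single pipeline traced at nvmf/common.sh@791 (root required):

  # The save/filter/restore cleanup traced above: reload the current
  # ruleset minus any rule whose saved form mentions SPDK_NVMF.
  iptables-save | grep -v SPDK_NVMF | iptables-restore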
18:41:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
18:41:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
18:41:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
18:41:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
18:41:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
18:41:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
18:41:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:41.809 ************************************
00:29:41.809 START TEST nvmf_bdevperf
00:29:41.809 ************************************
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:41.809 * Looking for test storage...
00:29:41.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
 --rc genhtml_branch_coverage=1
 --rc genhtml_function_coverage=1
 --rc genhtml_legend=1
 --rc geninfo_all_blocks=1
 --rc geninfo_unexecuted_blocks=1
 '
[... the identical coverage-flag block is echoed three more times, for the LCOV_OPTS assignment and for the export and assignment of LCOV='lcov ...' (common/autotest_common.sh@1724-@1725) ...]
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
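The cmp_versions walk above splits "1.15" and "2" on dots and compares component by component; the verdict (the installed lcov is pre-2.x) selects the 1.x-style --rc coverage flags exported next. Where GNU coreutils is available, the same question can be asked with sort -V — a hypothetical shorthand, not what scripts/common.sh actually does:

  # Hypothetical sort -V shorthand for the "lt 1.15 2" comparison above.
  version_lt() {
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
  }
  version_lt 1.15 2 && echo "lcov is pre-2.x; using 1.x coverage options"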
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:42.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.070 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.071 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.071 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.071 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.071 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.071 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.071 18:41:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:50.237 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.237 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:50.238 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
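The loop traced above resolves each detected PCI NIC to its kernel interface by globbing the device's sysfs net directory; the per-device results ("Found net devices under ...") follow below. A minimal standalone sketch of that sysfs lookup, assuming the PCI address and cvl_* names reported by this run (the helper name is illustrative, not part of nvmf/common.sh):

    # Map a PCI address to the net interface(s) bound to it, mirroring
    # the traced pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion.
    pci_to_netdev() {
        local pci=$1 dev
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] || continue   # glob did not match: no netdev bound (driver not loaded)
            echo "${dev##*/}"           # basename of the sysfs entry, e.g. cvl_0_0
        done
    }
    pci_to_netdev 0000:4b:00.0          # prints cvl_0_0 on this host

The script additionally checks the link state before accepting a device (the [[ up == up ]] test in the trace that follows).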
00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:50.238 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:50.238 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:50.238 18:41:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:50.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:29:50.238 00:29:50.238 --- 10.0.0.2 ping statistics --- 00:29:50.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.238 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:29:50.238 00:29:50.238 --- 10.0.0.1 ping statistics --- 00:29:50.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.238 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2316848 00:29:50.238 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2316848 00:29:50.239 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:50.239 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2316848 ']' 00:29:50.239 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.239 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.239 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.239 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.239 18:41:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.239 [2024-12-06 18:41:44.212993] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:29:50.239 [2024-12-06 18:41:44.213062] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.239 [2024-12-06 18:41:44.312452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:50.239 [2024-12-06 18:41:44.364361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.239 [2024-12-06 18:41:44.364417] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.239 [2024-12-06 18:41:44.364426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:50.239 [2024-12-06 18:41:44.364433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:50.239 [2024-12-06 18:41:44.364439] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.239 [2024-12-06 18:41:44.366252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:50.239 [2024-12-06 18:41:44.366412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.239 [2024-12-06 18:41:44.366414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.500 [2024-12-06 18:41:45.095406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.500 Malloc0 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
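At this point tgt_init has started nvmf_tgt inside the cvl_0_0_ns_spdk namespace and begun configuring it; the add_ns and add_listener steps appear next in the trace. The same bring-up, condensed into direct scripts/rpc.py calls as a sketch (rpc.py is SPDK's standard RPC client; the test itself goes through the rpc_cmd wrapper, and all values below are the ones traced in this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # flags exactly as traced at bdevperf.sh@17
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420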
00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:50.500 [2024-12-06 18:41:45.167915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:50.500 { 00:29:50.500 "params": { 00:29:50.500 "name": "Nvme$subsystem", 00:29:50.500 "trtype": "$TEST_TRANSPORT", 00:29:50.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.500 "adrfam": "ipv4", 00:29:50.500 "trsvcid": "$NVMF_PORT", 00:29:50.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.500 "hdgst": ${hdgst:-false}, 00:29:50.500 "ddgst": ${ddgst:-false} 00:29:50.500 }, 00:29:50.500 "method": "bdev_nvme_attach_controller" 00:29:50.500 } 00:29:50.500 EOF 00:29:50.500 )") 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:50.500 18:41:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.500 "params": { 00:29:50.500 "name": "Nvme1", 00:29:50.500 "trtype": "tcp", 00:29:50.500 "traddr": "10.0.0.2", 00:29:50.500 "adrfam": "ipv4", 00:29:50.500 "trsvcid": "4420", 00:29:50.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.500 "hdgst": false, 00:29:50.500 "ddgst": false 00:29:50.500 }, 00:29:50.500 "method": "bdev_nvme_attach_controller" 00:29:50.500 }' 00:29:50.500 [2024-12-06 18:41:45.227624] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:29:50.500 [2024-12-06 18:41:45.227700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2316969 ] 00:29:50.759 [2024-12-06 18:41:45.318584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.759 [2024-12-06 18:41:45.371714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.019 Running I/O for 1 seconds... 00:29:51.958 8905.00 IOPS, 34.79 MiB/s 00:29:51.958 Latency(us) 00:29:51.958 [2024-12-06T17:41:46.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.958 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:51.958 Verification LBA range: start 0x0 length 0x4000 00:29:51.958 Nvme1n1 : 1.01 8947.54 34.95 0.00 0.00 14232.97 1105.92 12014.93 00:29:51.958 [2024-12-06T17:41:46.742Z] =================================================================================================================== 00:29:51.958 [2024-12-06T17:41:46.742Z] Total : 8947.54 34.95 0.00 0.00 14232.97 1105.92 12014.93 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2317220 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:51.958 { 00:29:51.958 "params": { 00:29:51.958 "name": "Nvme$subsystem", 00:29:51.958 "trtype": "$TEST_TRANSPORT", 00:29:51.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.958 "adrfam": "ipv4", 00:29:51.958 "trsvcid": "$NVMF_PORT", 00:29:51.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.958 "hdgst": ${hdgst:-false}, 00:29:51.958 "ddgst": ${ddgst:-false} 00:29:51.958 }, 00:29:51.958 "method": "bdev_nvme_attach_controller" 00:29:51.958 } 00:29:51.958 EOF 00:29:51.958 )") 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:51.958 18:41:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:51.958 "params": { 00:29:51.958 "name": "Nvme1", 00:29:51.958 "trtype": "tcp", 00:29:51.958 "traddr": "10.0.0.2", 00:29:51.958 "adrfam": "ipv4", 00:29:51.958 "trsvcid": "4420", 00:29:51.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.958 "hdgst": false, 00:29:51.958 "ddgst": false 00:29:51.958 }, 00:29:51.958 "method": "bdev_nvme_attach_controller" 00:29:51.958 }' 00:29:52.219 [2024-12-06 18:41:46.763928] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:29:52.219 [2024-12-06 18:41:46.763980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2317220 ] 00:29:52.219 [2024-12-06 18:41:46.853998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.219 [2024-12-06 18:41:46.889794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.479 Running I/O for 15 seconds... 00:29:54.364 11255.00 IOPS, 43.96 MiB/s [2024-12-06T17:41:49.720Z] 11362.50 IOPS, 44.38 MiB/s [2024-12-06T17:41:49.720Z] 18:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2316848 00:29:55.202 18:41:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:55.202 [2024-12-06 18:41:49.735542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.202 [2024-12-06 18:41:49.735586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.202 [2024-12-06 18:41:49.735605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.202 [2024-12-06 18:41:49.735615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.202 [2024-12-06 18:41:49.735625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.202 [2024-12-06 18:41:49.735634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.202 [2024-12-06 18:41:49.735658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.202 [2024-12-06 18:41:49.735666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.202 [2024-12-06 18:41:49.735676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.202 [2024-12-06 18:41:49.735687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.202 [2024-12-06 18:41:49.735697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:111272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.202 [2024-12-06 
18:41:49.735708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... trace condensed: after bdevperf.sh@33 kills the target, each remaining outstanding command is printed by nvme_qpair.c:243 and completed by nvme_qpair.c:474 with the same ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 status: READ commands lba 111280 through 111760 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), then WRITE commands lba 111768 through 111912 (SGL DATA BLOCK OFFSET 0x0 len:0x1000). The per-command entries differ only in cid and lba and are elided here ...]
00:29:55.204 [2024-12-06 18:41:49.737420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:120 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112000 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.204 [2024-12-06 18:41:49.737766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:55.204 [2024-12-06 18:41:49.737783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.204 [2024-12-06 18:41:49.737792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 
18:41:49.737952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.737986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.737996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.738003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.738012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.738019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.738028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.738036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.738045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.738053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.738062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.738069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.738078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.738086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.738095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.738102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.738111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.205 [2024-12-06 18:41:49.738119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.738128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15acea0 is same with the state(6) to be set 00:29:55.205 [2024-12-06 18:41:49.738138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:55.205 [2024-12-06 18:41:49.738144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:55.205 [2024-12-06 18:41:49.738150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112248 len:8 PRP1 0x0 PRP2 0x0 00:29:55.205 [2024-12-06 18:41:49.738162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.205 [2024-12-06 18:41:49.741772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.205 [2024-12-06 18:41:49.741828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.205 [2024-12-06 18:41:49.742509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.205 [2024-12-06 18:41:49.742528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.205 [2024-12-06 18:41:49.742536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.205 [2024-12-06 18:41:49.742765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.205 [2024-12-06 18:41:49.742985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.205 [2024-12-06 18:41:49.742994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.205 [2024-12-06 18:41:49.743003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.205 [2024-12-06 18:41:49.743012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.205 [2024-12-06 18:41:49.755926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.205 [2024-12-06 18:41:49.756459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.205 [2024-12-06 18:41:49.756478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.205 [2024-12-06 18:41:49.756487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.205 [2024-12-06 18:41:49.756714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.205 [2024-12-06 18:41:49.756934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.205 [2024-12-06 18:41:49.756943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.205 [2024-12-06 18:41:49.756951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.205 [2024-12-06 18:41:49.756958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.205 [2024-12-06 18:41:49.769869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.205 [2024-12-06 18:41:49.770521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.205 [2024-12-06 18:41:49.770563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.205 [2024-12-06 18:41:49.770575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.205 [2024-12-06 18:41:49.770825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.205 [2024-12-06 18:41:49.771049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.205 [2024-12-06 18:41:49.771059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.205 [2024-12-06 18:41:49.771068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.205 [2024-12-06 18:41:49.771076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.205 [2024-12-06 18:41:49.783755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.205 [2024-12-06 18:41:49.784385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.205 [2024-12-06 18:41:49.784428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.205 [2024-12-06 18:41:49.784439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.205 [2024-12-06 18:41:49.784688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.205 [2024-12-06 18:41:49.784911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.205 [2024-12-06 18:41:49.784921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.205 [2024-12-06 18:41:49.784929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.205 [2024-12-06 18:41:49.784937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.205 [2024-12-06 18:41:49.797710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.205 [2024-12-06 18:41:49.798358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.205 [2024-12-06 18:41:49.798403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.205 [2024-12-06 18:41:49.798418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.798670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.798895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.798906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.798914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.206 [2024-12-06 18:41:49.798922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.206 [2024-12-06 18:41:49.811603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.206 [2024-12-06 18:41:49.812276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.206 [2024-12-06 18:41:49.812321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.206 [2024-12-06 18:41:49.812332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.812573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.812807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.812819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.812827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.206 [2024-12-06 18:41:49.812835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.206 [2024-12-06 18:41:49.825527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.206 [2024-12-06 18:41:49.826168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.206 [2024-12-06 18:41:49.826215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.206 [2024-12-06 18:41:49.826231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.826473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.826708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.826720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.826728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.206 [2024-12-06 18:41:49.826736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.206 [2024-12-06 18:41:49.839429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.206 [2024-12-06 18:41:49.839889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.206 [2024-12-06 18:41:49.839912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.206 [2024-12-06 18:41:49.839920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.840140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.840359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.840368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.840376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.206 [2024-12-06 18:41:49.840383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.206 [2024-12-06 18:41:49.853276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.206 [2024-12-06 18:41:49.853860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.206 [2024-12-06 18:41:49.853882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.206 [2024-12-06 18:41:49.853891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.854110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.854330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.854340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.854347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.206 [2024-12-06 18:41:49.854355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.206 [2024-12-06 18:41:49.867246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.206 [2024-12-06 18:41:49.867793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.206 [2024-12-06 18:41:49.867815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.206 [2024-12-06 18:41:49.867824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.868042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.868280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.868291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.868298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.206 [2024-12-06 18:41:49.868305] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.206 [2024-12-06 18:41:49.881019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.206 [2024-12-06 18:41:49.881566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.206 [2024-12-06 18:41:49.881589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.206 [2024-12-06 18:41:49.881598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.881825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.882047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.882058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.882066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.206 [2024-12-06 18:41:49.882073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.206 [2024-12-06 18:41:49.894973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.206 [2024-12-06 18:41:49.895685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.206 [2024-12-06 18:41:49.895751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.206 [2024-12-06 18:41:49.895765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.896020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.896247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.896259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.896268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.206 [2024-12-06 18:41:49.896278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.206 [2024-12-06 18:41:49.908840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.206 [2024-12-06 18:41:49.909570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.206 [2024-12-06 18:41:49.909636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.206 [2024-12-06 18:41:49.909666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.206 [2024-12-06 18:41:49.909923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.206 [2024-12-06 18:41:49.910153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.206 [2024-12-06 18:41:49.910166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.206 [2024-12-06 18:41:49.910182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.207 [2024-12-06 18:41:49.910192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.207 [2024-12-06 18:41:49.922721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.207 [2024-12-06 18:41:49.923430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.207 [2024-12-06 18:41:49.923495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.207 [2024-12-06 18:41:49.923509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.207 [2024-12-06 18:41:49.923781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.207 [2024-12-06 18:41:49.924011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.207 [2024-12-06 18:41:49.924024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.207 [2024-12-06 18:41:49.924032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.207 [2024-12-06 18:41:49.924042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.207 [2024-12-06 18:41:49.936550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.207 [2024-12-06 18:41:49.937274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.207 [2024-12-06 18:41:49.937339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.207 [2024-12-06 18:41:49.937352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.207 [2024-12-06 18:41:49.937608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.207 [2024-12-06 18:41:49.937852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.207 [2024-12-06 18:41:49.937865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.207 [2024-12-06 18:41:49.937874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.207 [2024-12-06 18:41:49.937884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.207 [2024-12-06 18:41:49.950387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.207 [2024-12-06 18:41:49.951012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.207 [2024-12-06 18:41:49.951044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.207 [2024-12-06 18:41:49.951054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.207 [2024-12-06 18:41:49.951275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.207 [2024-12-06 18:41:49.951498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.207 [2024-12-06 18:41:49.951510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.207 [2024-12-06 18:41:49.951519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.207 [2024-12-06 18:41:49.951528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.207 [2024-12-06 18:41:49.964275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.207 [2024-12-06 18:41:49.964994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.207 [2024-12-06 18:41:49.965058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.207 [2024-12-06 18:41:49.965072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.207 [2024-12-06 18:41:49.965327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.207 [2024-12-06 18:41:49.965555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.207 [2024-12-06 18:41:49.965567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.207 [2024-12-06 18:41:49.965576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.207 [2024-12-06 18:41:49.965586] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.207 [2024-12-06 18:41:49.978171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.207 [2024-12-06 18:41:49.978917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.207 [2024-12-06 18:41:49.978982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.207 [2024-12-06 18:41:49.978996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.207 [2024-12-06 18:41:49.979252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.207 [2024-12-06 18:41:49.979479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.207 [2024-12-06 18:41:49.979491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.207 [2024-12-06 18:41:49.979501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.207 [2024-12-06 18:41:49.979511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.471 [2024-12-06 18:41:49.992038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.471 [2024-12-06 18:41:49.992730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-12-06 18:41:49.992778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.471 [2024-12-06 18:41:49.992790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.471 [2024-12-06 18:41:49.993031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.471 [2024-12-06 18:41:49.993258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.471 [2024-12-06 18:41:49.993270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.471 [2024-12-06 18:41:49.993278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.471 [2024-12-06 18:41:49.993287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.471 [2024-12-06 18:41:50.006492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.471 [2024-12-06 18:41:50.006978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-12-06 18:41:50.007008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.471 [2024-12-06 18:41:50.007026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.471 [2024-12-06 18:41:50.007250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.471 [2024-12-06 18:41:50.007472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.471 [2024-12-06 18:41:50.007483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.471 [2024-12-06 18:41:50.007492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.471 [2024-12-06 18:41:50.007501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.471 [2024-12-06 18:41:50.020643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.471 [2024-12-06 18:41:50.021360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-12-06 18:41:50.021421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.471 [2024-12-06 18:41:50.021434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.471 [2024-12-06 18:41:50.021701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.471 [2024-12-06 18:41:50.021929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.471 [2024-12-06 18:41:50.021941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.471 [2024-12-06 18:41:50.021949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.471 [2024-12-06 18:41:50.021959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.471 [2024-12-06 18:41:50.034486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.471 [2024-12-06 18:41:50.035109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-12-06 18:41:50.035137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.471 [2024-12-06 18:41:50.035146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.471 [2024-12-06 18:41:50.035367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.471 [2024-12-06 18:41:50.035589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.471 [2024-12-06 18:41:50.035601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.471 [2024-12-06 18:41:50.035608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.471 [2024-12-06 18:41:50.035616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.471 [2024-12-06 18:41:50.048407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.471 [2024-12-06 18:41:50.049082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-12-06 18:41:50.049143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.471 [2024-12-06 18:41:50.049156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.471 [2024-12-06 18:41:50.049408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.471 [2024-12-06 18:41:50.049655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.471 [2024-12-06 18:41:50.049668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.471 [2024-12-06 18:41:50.049677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.471 [2024-12-06 18:41:50.049686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.471 10096.00 IOPS, 39.44 MiB/s [2024-12-06T17:41:50.255Z] [2024-12-06 18:41:50.062392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.471 [2024-12-06 18:41:50.063146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-12-06 18:41:50.063208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.471 [2024-12-06 18:41:50.063221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.471 [2024-12-06 18:41:50.063473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.471 [2024-12-06 18:41:50.063712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.471 [2024-12-06 18:41:50.063725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.471 [2024-12-06 18:41:50.063734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.472 [2024-12-06 18:41:50.063743] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.472 [2024-12-06 18:41:50.076294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.472 [2024-12-06 18:41:50.077015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-12-06 18:41:50.077076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.472 [2024-12-06 18:41:50.077089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.472 [2024-12-06 18:41:50.077342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.472 [2024-12-06 18:41:50.077569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.472 [2024-12-06 18:41:50.077580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.472 [2024-12-06 18:41:50.077589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.472 [2024-12-06 18:41:50.077598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.472 [2024-12-06 18:41:50.090106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.472 [2024-12-06 18:41:50.090770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-12-06 18:41:50.090836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.472 [2024-12-06 18:41:50.090850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.472 [2024-12-06 18:41:50.091106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.472 [2024-12-06 18:41:50.091334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.472 [2024-12-06 18:41:50.091347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.472 [2024-12-06 18:41:50.091363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.472 [2024-12-06 18:41:50.091373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.472 [2024-12-06 18:41:50.104107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.472 [2024-12-06 18:41:50.104883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-12-06 18:41:50.104947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.472 [2024-12-06 18:41:50.104961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.472 [2024-12-06 18:41:50.105217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.472 [2024-12-06 18:41:50.105445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.472 [2024-12-06 18:41:50.105459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.472 [2024-12-06 18:41:50.105467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.472 [2024-12-06 18:41:50.105477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.472 [2024-12-06 18:41:50.117904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.472 [2024-12-06 18:41:50.118661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-12-06 18:41:50.118726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.472 [2024-12-06 18:41:50.118741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.472 [2024-12-06 18:41:50.118998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.472 [2024-12-06 18:41:50.119225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.472 [2024-12-06 18:41:50.119238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.472 [2024-12-06 18:41:50.119247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.472 [2024-12-06 18:41:50.119257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.472 [2024-12-06 18:41:50.131769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.472 [2024-12-06 18:41:50.132448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-12-06 18:41:50.132513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.472 [2024-12-06 18:41:50.132527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.472 [2024-12-06 18:41:50.132798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.472 [2024-12-06 18:41:50.133027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.472 [2024-12-06 18:41:50.133039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.472 [2024-12-06 18:41:50.133049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.472 [2024-12-06 18:41:50.133058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.472 [2024-12-06 18:41:50.145573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.472 [2024-12-06 18:41:50.146293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-12-06 18:41:50.146360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.472 [2024-12-06 18:41:50.146373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.472 [2024-12-06 18:41:50.146628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.472 [2024-12-06 18:41:50.146870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.472 [2024-12-06 18:41:50.146882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.472 [2024-12-06 18:41:50.146890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.472 [2024-12-06 18:41:50.146900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.472 [2024-12-06 18:41:50.159404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.472 [2024-12-06 18:41:50.160105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-12-06 18:41:50.160171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.472 [2024-12-06 18:41:50.160184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.472 [2024-12-06 18:41:50.160441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.472 [2024-12-06 18:41:50.160685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.472 [2024-12-06 18:41:50.160698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.472 [2024-12-06 18:41:50.160707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.472 [2024-12-06 18:41:50.160717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.472 [2024-12-06 18:41:50.173264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.472 [2024-12-06 18:41:50.173868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-12-06 18:41:50.173932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.472 [2024-12-06 18:41:50.173947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.472 [2024-12-06 18:41:50.174203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.472 [2024-12-06 18:41:50.174430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.473 [2024-12-06 18:41:50.174443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.473 [2024-12-06 18:41:50.174452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.473 [2024-12-06 18:41:50.174462] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.473 [2024-12-06 18:41:50.187209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.473 [2024-12-06 18:41:50.187940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.473 [2024-12-06 18:41:50.188006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.473 [2024-12-06 18:41:50.188027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.473 [2024-12-06 18:41:50.188283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.473 [2024-12-06 18:41:50.188510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.473 [2024-12-06 18:41:50.188523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.473 [2024-12-06 18:41:50.188531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.473 [2024-12-06 18:41:50.188541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.473 [2024-12-06 18:41:50.201059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.473 [2024-12-06 18:41:50.201784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.473 [2024-12-06 18:41:50.201849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.473 [2024-12-06 18:41:50.201862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.473 [2024-12-06 18:41:50.202118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.473 [2024-12-06 18:41:50.202346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.473 [2024-12-06 18:41:50.202358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.473 [2024-12-06 18:41:50.202366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.473 [2024-12-06 18:41:50.202376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.473 [2024-12-06 18:41:50.214900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.473 [2024-12-06 18:41:50.215618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.473 [2024-12-06 18:41:50.215694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.473 [2024-12-06 18:41:50.215708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.473 [2024-12-06 18:41:50.215964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.473 [2024-12-06 18:41:50.216192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.473 [2024-12-06 18:41:50.216205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.473 [2024-12-06 18:41:50.216214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.473 [2024-12-06 18:41:50.216224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.473 [2024-12-06 18:41:50.228780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.473 [2024-12-06 18:41:50.229515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.473 [2024-12-06 18:41:50.229582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.473 [2024-12-06 18:41:50.229595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.473 [2024-12-06 18:41:50.229865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.473 [2024-12-06 18:41:50.230102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.473 [2024-12-06 18:41:50.230114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.473 [2024-12-06 18:41:50.230123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.473 [2024-12-06 18:41:50.230134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.473 [2024-12-06 18:41:50.242680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.473 [2024-12-06 18:41:50.243308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.473 [2024-12-06 18:41:50.243339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.473 [2024-12-06 18:41:50.243349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.473 [2024-12-06 18:41:50.243579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.473 [2024-12-06 18:41:50.243816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.473 [2024-12-06 18:41:50.243828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.473 [2024-12-06 18:41:50.243836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.473 [2024-12-06 18:41:50.243844] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.738 [2024-12-06 18:41:50.256609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.738 [2024-12-06 18:41:50.257196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-06 18:41:50.257225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.738 [2024-12-06 18:41:50.257234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.738 [2024-12-06 18:41:50.257455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.738 [2024-12-06 18:41:50.257687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.738 [2024-12-06 18:41:50.257700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.738 [2024-12-06 18:41:50.257709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.738 [2024-12-06 18:41:50.257717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.738 [2024-12-06 18:41:50.270490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.738 [2024-12-06 18:41:50.271072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-06 18:41:50.271099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.738 [2024-12-06 18:41:50.271108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.738 [2024-12-06 18:41:50.271328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.738 [2024-12-06 18:41:50.271550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.738 [2024-12-06 18:41:50.271563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.738 [2024-12-06 18:41:50.271578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.738 [2024-12-06 18:41:50.271590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.738 [2024-12-06 18:41:50.284360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.738 [2024-12-06 18:41:50.285017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-06 18:41:50.285084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.738 [2024-12-06 18:41:50.285097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.738 [2024-12-06 18:41:50.285354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.738 [2024-12-06 18:41:50.285582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.738 [2024-12-06 18:41:50.285595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.738 [2024-12-06 18:41:50.285603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.738 [2024-12-06 18:41:50.285613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.738 [2024-12-06 18:41:50.298186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.738 [2024-12-06 18:41:50.298690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.738 [2024-12-06 18:41:50.298723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.738 [2024-12-06 18:41:50.298733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.738 [2024-12-06 18:41:50.298955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.738 [2024-12-06 18:41:50.299178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.738 [2024-12-06 18:41:50.299190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.739 [2024-12-06 18:41:50.299198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.739 [2024-12-06 18:41:50.299208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.739 [2024-12-06 18:41:50.312163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.739 [2024-12-06 18:41:50.312941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-06 18:41:50.313006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.739 [2024-12-06 18:41:50.313020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.739 [2024-12-06 18:41:50.313276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.739 [2024-12-06 18:41:50.313504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.739 [2024-12-06 18:41:50.313517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.739 [2024-12-06 18:41:50.313526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.739 [2024-12-06 18:41:50.313536] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.739 [2024-12-06 18:41:50.326138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.739 [2024-12-06 18:41:50.326818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-06 18:41:50.326887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.739 [2024-12-06 18:41:50.326901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.739 [2024-12-06 18:41:50.327161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.739 [2024-12-06 18:41:50.327389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.739 [2024-12-06 18:41:50.327400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.739 [2024-12-06 18:41:50.327410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.739 [2024-12-06 18:41:50.327419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.739 [2024-12-06 18:41:50.339965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.739 [2024-12-06 18:41:50.340667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-06 18:41:50.340733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.739 [2024-12-06 18:41:50.340748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.739 [2024-12-06 18:41:50.341005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.739 [2024-12-06 18:41:50.341232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.739 [2024-12-06 18:41:50.341243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.739 [2024-12-06 18:41:50.341253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.739 [2024-12-06 18:41:50.341263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.739 [2024-12-06 18:41:50.353835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.739 [2024-12-06 18:41:50.354543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-06 18:41:50.354609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.739 [2024-12-06 18:41:50.354624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.739 [2024-12-06 18:41:50.354894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.739 [2024-12-06 18:41:50.355124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.739 [2024-12-06 18:41:50.355136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.739 [2024-12-06 18:41:50.355145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.739 [2024-12-06 18:41:50.355154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.739 [2024-12-06 18:41:50.367719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.739 [2024-12-06 18:41:50.368398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-06 18:41:50.368462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.739 [2024-12-06 18:41:50.368484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.739 [2024-12-06 18:41:50.368773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.739 [2024-12-06 18:41:50.369003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.739 [2024-12-06 18:41:50.369015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.739 [2024-12-06 18:41:50.369024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.739 [2024-12-06 18:41:50.369034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.739 [2024-12-06 18:41:50.381613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.739 [2024-12-06 18:41:50.382306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-06 18:41:50.382371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.739 [2024-12-06 18:41:50.382385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.739 [2024-12-06 18:41:50.382655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.739 [2024-12-06 18:41:50.382884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.739 [2024-12-06 18:41:50.382896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.739 [2024-12-06 18:41:50.382905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.739 [2024-12-06 18:41:50.382915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.739 [2024-12-06 18:41:50.395465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.739 [2024-12-06 18:41:50.395986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.739 [2024-12-06 18:41:50.396017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.739 [2024-12-06 18:41:50.396026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.739 [2024-12-06 18:41:50.396249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.739 [2024-12-06 18:41:50.396471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.739 [2024-12-06 18:41:50.396485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.739 [2024-12-06 18:41:50.396493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.739 [2024-12-06 18:41:50.396501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.739 [2024-12-06 18:41:50.409478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.739 [2024-12-06 18:41:50.410062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-06 18:41:50.410090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.740 [2024-12-06 18:41:50.410099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.740 [2024-12-06 18:41:50.410321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.740 [2024-12-06 18:41:50.410550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.740 [2024-12-06 18:41:50.410562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.740 [2024-12-06 18:41:50.410570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.740 [2024-12-06 18:41:50.410577] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.740 [2024-12-06 18:41:50.423335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.740 [2024-12-06 18:41:50.423843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-06 18:41:50.423870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.740 [2024-12-06 18:41:50.423880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.740 [2024-12-06 18:41:50.424100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.740 [2024-12-06 18:41:50.424320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.740 [2024-12-06 18:41:50.424332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.740 [2024-12-06 18:41:50.424340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.740 [2024-12-06 18:41:50.424348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.740 [2024-12-06 18:41:50.437314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.740 [2024-12-06 18:41:50.437895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-06 18:41:50.437922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.740 [2024-12-06 18:41:50.437932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.740 [2024-12-06 18:41:50.438153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.740 [2024-12-06 18:41:50.438374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.740 [2024-12-06 18:41:50.438387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.740 [2024-12-06 18:41:50.438395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.740 [2024-12-06 18:41:50.438402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.740 [2024-12-06 18:41:50.451152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.740 [2024-12-06 18:41:50.451727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-06 18:41:50.451753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.740 [2024-12-06 18:41:50.451762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.740 [2024-12-06 18:41:50.451982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.740 [2024-12-06 18:41:50.452203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.740 [2024-12-06 18:41:50.452215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.740 [2024-12-06 18:41:50.452231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.740 [2024-12-06 18:41:50.452239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.740 [2024-12-06 18:41:50.464999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.740 [2024-12-06 18:41:50.465610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-06 18:41:50.465635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.740 [2024-12-06 18:41:50.465655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.740 [2024-12-06 18:41:50.465875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.740 [2024-12-06 18:41:50.466095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.740 [2024-12-06 18:41:50.466107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.740 [2024-12-06 18:41:50.466115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.740 [2024-12-06 18:41:50.466122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.740 [2024-12-06 18:41:50.478907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.740 [2024-12-06 18:41:50.479497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-06 18:41:50.479523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.740 [2024-12-06 18:41:50.479532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.740 [2024-12-06 18:41:50.479761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.740 [2024-12-06 18:41:50.479984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.740 [2024-12-06 18:41:50.479997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.740 [2024-12-06 18:41:50.480005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.740 [2024-12-06 18:41:50.480014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:55.740 [2024-12-06 18:41:50.492759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.740 [2024-12-06 18:41:50.493323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-06 18:41:50.493349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.740 [2024-12-06 18:41:50.493358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.740 [2024-12-06 18:41:50.493580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.740 [2024-12-06 18:41:50.493819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.740 [2024-12-06 18:41:50.493830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.740 [2024-12-06 18:41:50.493839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.740 [2024-12-06 18:41:50.493848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:55.740 [2024-12-06 18:41:50.506607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:55.740 [2024-12-06 18:41:50.507219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.740 [2024-12-06 18:41:50.507246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:55.740 [2024-12-06 18:41:50.507255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:55.740 [2024-12-06 18:41:50.507476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:55.740 [2024-12-06 18:41:50.507707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:55.740 [2024-12-06 18:41:50.507722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:55.740 [2024-12-06 18:41:50.507730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:55.740 [2024-12-06 18:41:50.507738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.004 [2024-12-06 18:41:50.520488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.521081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.521106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.521115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.521334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.521555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.521566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.521575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.521582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.005 [2024-12-06 18:41:50.534347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.534828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.534854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.534863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.535084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.535305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.535316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.535324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.535332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.005 [2024-12-06 18:41:50.548285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.548782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.548807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.548822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.549043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.549263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.549276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.549284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.549293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.005 [2024-12-06 18:41:50.562178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.562781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.562847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.562864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.563121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.563349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.563364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.563373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.563383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.005 [2024-12-06 18:41:50.576165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.576754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.576787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.576796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.577019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.577241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.577254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.577262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.577270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.005 [2024-12-06 18:41:50.590028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.590696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.590763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.590777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.591032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.591268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.591280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.591289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.591299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.005 [2024-12-06 18:41:50.603838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.604459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.604491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.604500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.604732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.604955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.604967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.604975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.604983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.005 [2024-12-06 18:41:50.617704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.618387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.618453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.618466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.618733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.618974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.618987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.618996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.619006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.005 [2024-12-06 18:41:50.631530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.632215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.632281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.632295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.632550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.005 [2024-12-06 18:41:50.632795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.005 [2024-12-06 18:41:50.632809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.005 [2024-12-06 18:41:50.632825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.005 [2024-12-06 18:41:50.632835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.005 [2024-12-06 18:41:50.645356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.005 [2024-12-06 18:41:50.646051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.005 [2024-12-06 18:41:50.646115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.005 [2024-12-06 18:41:50.646129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.005 [2024-12-06 18:41:50.646384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.006 [2024-12-06 18:41:50.646614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.006 [2024-12-06 18:41:50.646627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.006 [2024-12-06 18:41:50.646635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.006 [2024-12-06 18:41:50.646659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.006 [2024-12-06 18:41:50.659185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.006 [2024-12-06 18:41:50.659914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-12-06 18:41:50.659980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.006 [2024-12-06 18:41:50.659994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.006 [2024-12-06 18:41:50.660249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.006 [2024-12-06 18:41:50.660478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.006 [2024-12-06 18:41:50.660490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.006 [2024-12-06 18:41:50.660498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.006 [2024-12-06 18:41:50.660508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.006 [2024-12-06 18:41:50.673070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.006 [2024-12-06 18:41:50.673699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-12-06 18:41:50.673733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.006 [2024-12-06 18:41:50.673742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.006 [2024-12-06 18:41:50.673965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.006 [2024-12-06 18:41:50.674187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.006 [2024-12-06 18:41:50.674199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.006 [2024-12-06 18:41:50.674207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.006 [2024-12-06 18:41:50.674216] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.006 [2024-12-06 18:41:50.686952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.006 [2024-12-06 18:41:50.687629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.006 [2024-12-06 18:41:50.687707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:56.006 [2024-12-06 18:41:50.687721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:56.006 [2024-12-06 18:41:50.687976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:56.006 [2024-12-06 18:41:50.688204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.006 [2024-12-06 18:41:50.688217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.006 [2024-12-06 18:41:50.688226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.006 [2024-12-06 18:41:50.688235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:56.006 [2024-12-06 18:41:50.700770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.006 [2024-12-06 18:41:50.701448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.006 [2024-12-06 18:41:50.701512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.006 [2024-12-06 18:41:50.701526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.006 [2024-12-06 18:41:50.701795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.006 [2024-12-06 18:41:50.702024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.006 [2024-12-06 18:41:50.702036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.006 [2024-12-06 18:41:50.702046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.006 [2024-12-06 18:41:50.702055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.006 [2024-12-06 18:41:50.714578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.006 [2024-12-06 18:41:50.715260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.006 [2024-12-06 18:41:50.715327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.006 [2024-12-06 18:41:50.715340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.006 [2024-12-06 18:41:50.715596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.006 [2024-12-06 18:41:50.715838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.006 [2024-12-06 18:41:50.715851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.006 [2024-12-06 18:41:50.715861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.006 [2024-12-06 18:41:50.715870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.006 [2024-12-06 18:41:50.728401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.006 [2024-12-06 18:41:50.728912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.006 [2024-12-06 18:41:50.728944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.006 [2024-12-06 18:41:50.728961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.006 [2024-12-06 18:41:50.729184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.006 [2024-12-06 18:41:50.729407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.006 [2024-12-06 18:41:50.729418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.006 [2024-12-06 18:41:50.729426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.006 [2024-12-06 18:41:50.729434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.006 [2024-12-06 18:41:50.742378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.006 [2024-12-06 18:41:50.742971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.006 [2024-12-06 18:41:50.742999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.006 [2024-12-06 18:41:50.743009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.006 [2024-12-06 18:41:50.743230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.006 [2024-12-06 18:41:50.743465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.006 [2024-12-06 18:41:50.743477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.006 [2024-12-06 18:41:50.743485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.006 [2024-12-06 18:41:50.743493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.006 [2024-12-06 18:41:50.756242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.006 [2024-12-06 18:41:50.756868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.006 [2024-12-06 18:41:50.756896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.006 [2024-12-06 18:41:50.756906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.006 [2024-12-06 18:41:50.757128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.006 [2024-12-06 18:41:50.757349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.006 [2024-12-06 18:41:50.757360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.006 [2024-12-06 18:41:50.757369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.006 [2024-12-06 18:41:50.757377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.006 [2024-12-06 18:41:50.770102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.006 [2024-12-06 18:41:50.770750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.006 [2024-12-06 18:41:50.770816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.006 [2024-12-06 18:41:50.770830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.006 [2024-12-06 18:41:50.771086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.006 [2024-12-06 18:41:50.771337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.006 [2024-12-06 18:41:50.771352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.006 [2024-12-06 18:41:50.771362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.006 [2024-12-06 18:41:50.771373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.006 [2024-12-06 18:41:50.783924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.006 [2024-12-06 18:41:50.784618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.006 [2024-12-06 18:41:50.784698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.007 [2024-12-06 18:41:50.784712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.007 [2024-12-06 18:41:50.784969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.007 [2024-12-06 18:41:50.785196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.007 [2024-12-06 18:41:50.785209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.007 [2024-12-06 18:41:50.785218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.007 [2024-12-06 18:41:50.785228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.269 [2024-12-06 18:41:50.797758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.269 [2024-12-06 18:41:50.798350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.269 [2024-12-06 18:41:50.798382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.269 [2024-12-06 18:41:50.798391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.269 [2024-12-06 18:41:50.798613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.269 [2024-12-06 18:41:50.798850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.269 [2024-12-06 18:41:50.798864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.269 [2024-12-06 18:41:50.798872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.269 [2024-12-06 18:41:50.798881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.269 [2024-12-06 18:41:50.811596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.269 [2024-12-06 18:41:50.812203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.269 [2024-12-06 18:41:50.812230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.269 [2024-12-06 18:41:50.812241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.269 [2024-12-06 18:41:50.812461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.269 [2024-12-06 18:41:50.812690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.269 [2024-12-06 18:41:50.812704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.269 [2024-12-06 18:41:50.812720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.269 [2024-12-06 18:41:50.812728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.269 [2024-12-06 18:41:50.825454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.269 [2024-12-06 18:41:50.826062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.269 [2024-12-06 18:41:50.826089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.269 [2024-12-06 18:41:50.826098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.269 [2024-12-06 18:41:50.826318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.269 [2024-12-06 18:41:50.826540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.269 [2024-12-06 18:41:50.826550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.269 [2024-12-06 18:41:50.826559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.269 [2024-12-06 18:41:50.826568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.269 [2024-12-06 18:41:50.839288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.269 [2024-12-06 18:41:50.839998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.269 [2024-12-06 18:41:50.840064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.269 [2024-12-06 18:41:50.840079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.269 [2024-12-06 18:41:50.840336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.269 [2024-12-06 18:41:50.840564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.269 [2024-12-06 18:41:50.840575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.269 [2024-12-06 18:41:50.840585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.269 [2024-12-06 18:41:50.840595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.269 [2024-12-06 18:41:50.853125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.269 [2024-12-06 18:41:50.853719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.269 [2024-12-06 18:41:50.853752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.269 [2024-12-06 18:41:50.853761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.269 [2024-12-06 18:41:50.853984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.269 [2024-12-06 18:41:50.854206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.269 [2024-12-06 18:41:50.854219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.269 [2024-12-06 18:41:50.854227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.269 [2024-12-06 18:41:50.854235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.269 [2024-12-06 18:41:50.866966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.269 [2024-12-06 18:41:50.867457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.269 [2024-12-06 18:41:50.867482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.269 [2024-12-06 18:41:50.867492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.269 [2024-12-06 18:41:50.867720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.269 [2024-12-06 18:41:50.867943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.269 [2024-12-06 18:41:50.867956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.269 [2024-12-06 18:41:50.867965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.269 [2024-12-06 18:41:50.867975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.269 [2024-12-06 18:41:50.879593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.269 [2024-12-06 18:41:50.880102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.269 [2024-12-06 18:41:50.880127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.880134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.880289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.880443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.880451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.880458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.880464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.892321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.892812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.892833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.892840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.892993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.893146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.893156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.893162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.893168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.905017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.905403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.905421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.905432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.905583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.905741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.905749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.905754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.905760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.917749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.918251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.918269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.918275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.918425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.918578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.918587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.918592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.918598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.930440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.930988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.931031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.931040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.931214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.931373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.931381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.931388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.931394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.943116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.943517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.943537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.943544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.943701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.943860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.943867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.943872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.943878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.955858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.956327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.956344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.956351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.956502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.956659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.956667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.956672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.956677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.968514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.969013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.969029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.969035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.969185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.969336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.969344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.969349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.969355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.981199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.981747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.981783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.981793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.981965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.982120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.982129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.982141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.982148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:50.993880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:50.994498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.270 [2024-12-06 18:41:50.994532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.270 [2024-12-06 18:41:50.994544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.270 [2024-12-06 18:41:50.994720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.270 [2024-12-06 18:41:50.994875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.270 [2024-12-06 18:41:50.994883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.270 [2024-12-06 18:41:50.994889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.270 [2024-12-06 18:41:50.994895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.270 [2024-12-06 18:41:51.006597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.270 [2024-12-06 18:41:51.007133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.271 [2024-12-06 18:41:51.007150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.271 [2024-12-06 18:41:51.007156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.271 [2024-12-06 18:41:51.007307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.271 [2024-12-06 18:41:51.007458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.271 [2024-12-06 18:41:51.007465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.271 [2024-12-06 18:41:51.007472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.271 [2024-12-06 18:41:51.007477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.271 [2024-12-06 18:41:51.019311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.271 [2024-12-06 18:41:51.019915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.271 [2024-12-06 18:41:51.019949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.271 [2024-12-06 18:41:51.019958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.271 [2024-12-06 18:41:51.020125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.271 [2024-12-06 18:41:51.020279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.271 [2024-12-06 18:41:51.020286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.271 [2024-12-06 18:41:51.020292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.271 [2024-12-06 18:41:51.020298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.271 [2024-12-06 18:41:51.032004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.271 [2024-12-06 18:41:51.032558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.271 [2024-12-06 18:41:51.032591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.271 [2024-12-06 18:41:51.032600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.271 [2024-12-06 18:41:51.032776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.271 [2024-12-06 18:41:51.032931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.271 [2024-12-06 18:41:51.032938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.271 [2024-12-06 18:41:51.032944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.271 [2024-12-06 18:41:51.032951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.271 [2024-12-06 18:41:51.044642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.271 [2024-12-06 18:41:51.045213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.271 [2024-12-06 18:41:51.045245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.271 [2024-12-06 18:41:51.045254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.271 [2024-12-06 18:41:51.045422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.271 [2024-12-06 18:41:51.045576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.271 [2024-12-06 18:41:51.045583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.271 [2024-12-06 18:41:51.045589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.271 [2024-12-06 18:41:51.045596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.534 7572.00 IOPS, 29.58 MiB/s [2024-12-06T17:41:51.318Z] [2024-12-06 18:41:51.057291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.534 [2024-12-06 18:41:51.057937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.534 [2024-12-06 18:41:51.057969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.534 [2024-12-06 18:41:51.057978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.534 [2024-12-06 18:41:51.058144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.534 [2024-12-06 18:41:51.058297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.534 [2024-12-06 18:41:51.058305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.534 [2024-12-06 18:41:51.058310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.534 [2024-12-06 18:41:51.058317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.534 [2024-12-06 18:41:51.070079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.534 [2024-12-06 18:41:51.070678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.534 [2024-12-06 18:41:51.070710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.534 [2024-12-06 18:41:51.070722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.534 [2024-12-06 18:41:51.070889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.534 [2024-12-06 18:41:51.071042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.534 [2024-12-06 18:41:51.071049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.534 [2024-12-06 18:41:51.071055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.534 [2024-12-06 18:41:51.071061] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.534 [2024-12-06 18:41:51.082765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.534 [2024-12-06 18:41:51.083315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.534 [2024-12-06 18:41:51.083347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.534 [2024-12-06 18:41:51.083355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.534 [2024-12-06 18:41:51.083521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.534 [2024-12-06 18:41:51.083681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.534 [2024-12-06 18:41:51.083689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.534 [2024-12-06 18:41:51.083695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.534 [2024-12-06 18:41:51.083701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.534 [2024-12-06 18:41:51.095388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.534 [2024-12-06 18:41:51.095814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.534 [2024-12-06 18:41:51.095846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.534 [2024-12-06 18:41:51.095855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.534 [2024-12-06 18:41:51.096022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.534 [2024-12-06 18:41:51.096176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.534 [2024-12-06 18:41:51.096184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.534 [2024-12-06 18:41:51.096190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.534 [2024-12-06 18:41:51.096196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.534 [2024-12-06 18:41:51.108036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.534 [2024-12-06 18:41:51.108612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.534 [2024-12-06 18:41:51.108650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.534 [2024-12-06 18:41:51.108658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.534 [2024-12-06 18:41:51.108825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.534 [2024-12-06 18:41:51.108983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.534 [2024-12-06 18:41:51.108990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.534 [2024-12-06 18:41:51.108996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.534 [2024-12-06 18:41:51.109002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.534 [2024-12-06 18:41:51.120684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.534 [2024-12-06 18:41:51.121263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.534 [2024-12-06 18:41:51.121295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.534 [2024-12-06 18:41:51.121304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.534 [2024-12-06 18:41:51.121470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.534 [2024-12-06 18:41:51.121623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.534 [2024-12-06 18:41:51.121631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.534 [2024-12-06 18:41:51.121643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.121650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.133341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.133973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.134004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.134013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.134179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.134333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.134341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.134347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.134354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.146039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.146599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.146630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.146645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.146812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.146966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.146973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.146983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.146990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.158680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.159253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.159285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.159294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.159461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.159615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.159622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.159628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.159634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.171328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.171935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.171967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.171975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.172141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.172295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.172302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.172308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.172315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.184007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.184460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.184476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.184482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.184632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.184789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.184796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.184802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.184807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.196628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.197213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.197245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.197254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.197420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.197574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.197581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.197588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.197594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.209281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.209858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.209889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.209899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.210064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.210218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.210225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.210230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.210237] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.221922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.222512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.222544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.222552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.222725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.222879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.222887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.222892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.222898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.234583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.235179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.235211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.235223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.235389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.235543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.235551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.235557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.535 [2024-12-06 18:41:51.235563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.535 [2024-12-06 18:41:51.247252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.535 [2024-12-06 18:41:51.247788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.535 [2024-12-06 18:41:51.247820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.535 [2024-12-06 18:41:51.247830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.535 [2024-12-06 18:41:51.247999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.535 [2024-12-06 18:41:51.248152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.535 [2024-12-06 18:41:51.248160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.535 [2024-12-06 18:41:51.248166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.536 [2024-12-06 18:41:51.248171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.536 [2024-12-06 18:41:51.259858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.536 [2024-12-06 18:41:51.260411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.536 [2024-12-06 18:41:51.260443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.536 [2024-12-06 18:41:51.260452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.536 [2024-12-06 18:41:51.260618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.536 [2024-12-06 18:41:51.260777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.536 [2024-12-06 18:41:51.260785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.536 [2024-12-06 18:41:51.260791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.536 [2024-12-06 18:41:51.260797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.536 [2024-12-06 18:41:51.272487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.536 [2024-12-06 18:41:51.273080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.536 [2024-12-06 18:41:51.273111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.536 [2024-12-06 18:41:51.273121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.536 [2024-12-06 18:41:51.273294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.536 [2024-12-06 18:41:51.273452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.536 [2024-12-06 18:41:51.273460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.536 [2024-12-06 18:41:51.273466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.536 [2024-12-06 18:41:51.273471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.536 [2024-12-06 18:41:51.285156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.536 [2024-12-06 18:41:51.285714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.536 [2024-12-06 18:41:51.285746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.536 [2024-12-06 18:41:51.285755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.536 [2024-12-06 18:41:51.285924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.536 [2024-12-06 18:41:51.286077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.536 [2024-12-06 18:41:51.286084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.536 [2024-12-06 18:41:51.286090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.536 [2024-12-06 18:41:51.286096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.536 [2024-12-06 18:41:51.297784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.536 [2024-12-06 18:41:51.298374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.536 [2024-12-06 18:41:51.298406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.536 [2024-12-06 18:41:51.298415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.536 [2024-12-06 18:41:51.298581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.536 [2024-12-06 18:41:51.298742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.536 [2024-12-06 18:41:51.298750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.536 [2024-12-06 18:41:51.298756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.536 [2024-12-06 18:41:51.298762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.536 [2024-12-06 18:41:51.310439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.536 [2024-12-06 18:41:51.310995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.536 [2024-12-06 18:41:51.311027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.536 [2024-12-06 18:41:51.311035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.536 [2024-12-06 18:41:51.311201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.536 [2024-12-06 18:41:51.311355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.536 [2024-12-06 18:41:51.311363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.536 [2024-12-06 18:41:51.311373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.536 [2024-12-06 18:41:51.311379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.798 [2024-12-06 18:41:51.323073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.798 [2024-12-06 18:41:51.323623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.798 [2024-12-06 18:41:51.323660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.798 [2024-12-06 18:41:51.323669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.798 [2024-12-06 18:41:51.323836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.798 [2024-12-06 18:41:51.323990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.798 [2024-12-06 18:41:51.323997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.798 [2024-12-06 18:41:51.324003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.798 [2024-12-06 18:41:51.324010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.798 [2024-12-06 18:41:51.335694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.798 [2024-12-06 18:41:51.336288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.798 [2024-12-06 18:41:51.336319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.798 [2024-12-06 18:41:51.336328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.798 [2024-12-06 18:41:51.336495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.798 [2024-12-06 18:41:51.336657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.798 [2024-12-06 18:41:51.336665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.798 [2024-12-06 18:41:51.336671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.798 [2024-12-06 18:41:51.336677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.798 [2024-12-06 18:41:51.348353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.798 [2024-12-06 18:41:51.348918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.798 [2024-12-06 18:41:51.348949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.798 [2024-12-06 18:41:51.348958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.798 [2024-12-06 18:41:51.349124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.349278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.349285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.349291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.349297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.360982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.361561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.361593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.361602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.361776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.361930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.361938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.361943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.361949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.373645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.374220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.374251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.374260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.374426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.374580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.374587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.374595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.374601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.386286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.386744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.386776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.386785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.386953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.387106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.387113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.387119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.387125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.398956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.399508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.399540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.399552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.399725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.399880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.399887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.399893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.399898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.411593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.412204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.412236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.412245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.412411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.412565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.412573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.412579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.412585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.424267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.424765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.424797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.424806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.424975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.425128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.425136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.425142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.425148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.436976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.437556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.437587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.437596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.437769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.437927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.437934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.437941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.437947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.449626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.450180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.450212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.450221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.450387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.450541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.450549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.450555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.450561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.462254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.462780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.462812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.799 [2024-12-06 18:41:51.462821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.799 [2024-12-06 18:41:51.462988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.799 [2024-12-06 18:41:51.463142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.799 [2024-12-06 18:41:51.463149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.799 [2024-12-06 18:41:51.463156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.799 [2024-12-06 18:41:51.463161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.799 [2024-12-06 18:41:51.474867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.799 [2024-12-06 18:41:51.475364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.799 [2024-12-06 18:41:51.475380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.475386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.475536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.475693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.475700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.475709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.475714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.800 [2024-12-06 18:41:51.487531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.800 [2024-12-06 18:41:51.487955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.800 [2024-12-06 18:41:51.487970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.487975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.488125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.488276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.488282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.488287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.488292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.800 [2024-12-06 18:41:51.500250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.800 [2024-12-06 18:41:51.500873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.800 [2024-12-06 18:41:51.500904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.500913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.501079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.501233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.501240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.501246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.501252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.800 [2024-12-06 18:41:51.512942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.800 [2024-12-06 18:41:51.513428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.800 [2024-12-06 18:41:51.513460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.513470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.513636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.513797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.513805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.513810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.513816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.800 [2024-12-06 18:41:51.525641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.800 [2024-12-06 18:41:51.526230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.800 [2024-12-06 18:41:51.526262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.526270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.526436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.526590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.526597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.526603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.526609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.800 [2024-12-06 18:41:51.538291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.800 [2024-12-06 18:41:51.538903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.800 [2024-12-06 18:41:51.538935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.538944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.539109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.539263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.539270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.539276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.539282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.800 [2024-12-06 18:41:51.550967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.800 [2024-12-06 18:41:51.551521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.800 [2024-12-06 18:41:51.551552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.551561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.551734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.551888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.551896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.551902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.551908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.800 [2024-12-06 18:41:51.563583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.800 [2024-12-06 18:41:51.564192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.800 [2024-12-06 18:41:51.564223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.564236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.564402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.564556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.564563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.564569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.564575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:56.800 [2024-12-06 18:41:51.576285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:56.800 [2024-12-06 18:41:51.576861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.800 [2024-12-06 18:41:51.576893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:56.800 [2024-12-06 18:41:51.576902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:56.800 [2024-12-06 18:41:51.577068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:56.800 [2024-12-06 18:41:51.577220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:56.800 [2024-12-06 18:41:51.577228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:56.800 [2024-12-06 18:41:51.577234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:56.800 [2024-12-06 18:41:51.577240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.064 [2024-12-06 18:41:51.588899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.064 [2024-12-06 18:41:51.589501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.064 [2024-12-06 18:41:51.589534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.064 [2024-12-06 18:41:51.589542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.064 [2024-12-06 18:41:51.589716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.064 [2024-12-06 18:41:51.589871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.064 [2024-12-06 18:41:51.589878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.064 [2024-12-06 18:41:51.589884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.064 [2024-12-06 18:41:51.589890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.064 [2024-12-06 18:41:51.601575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.064 [2024-12-06 18:41:51.602205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.064 [2024-12-06 18:41:51.602237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.064 [2024-12-06 18:41:51.602246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.064 [2024-12-06 18:41:51.602412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.064 [2024-12-06 18:41:51.602573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.064 [2024-12-06 18:41:51.602581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.064 [2024-12-06 18:41:51.602587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.064 [2024-12-06 18:41:51.602593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.064 [2024-12-06 18:41:51.614289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.064 [2024-12-06 18:41:51.614902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.064 [2024-12-06 18:41:51.614934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.064 [2024-12-06 18:41:51.614943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.064 [2024-12-06 18:41:51.615108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.064 [2024-12-06 18:41:51.615262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.064 [2024-12-06 18:41:51.615269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.064 [2024-12-06 18:41:51.615276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.064 [2024-12-06 18:41:51.615282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.064 [2024-12-06 18:41:51.626964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.064 [2024-12-06 18:41:51.627426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.064 [2024-12-06 18:41:51.627441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.064 [2024-12-06 18:41:51.627447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.064 [2024-12-06 18:41:51.627597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.064 [2024-12-06 18:41:51.627754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.064 [2024-12-06 18:41:51.627761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.064 [2024-12-06 18:41:51.627767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.064 [2024-12-06 18:41:51.627772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.064 [2024-12-06 18:41:51.639583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.064 [2024-12-06 18:41:51.640166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.064 [2024-12-06 18:41:51.640198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.064 [2024-12-06 18:41:51.640207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.064 [2024-12-06 18:41:51.640373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.064 [2024-12-06 18:41:51.640527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.064 [2024-12-06 18:41:51.640534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.064 [2024-12-06 18:41:51.640544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.064 [2024-12-06 18:41:51.640550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.064 [2024-12-06 18:41:51.652310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.064 [2024-12-06 18:41:51.652917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.064 [2024-12-06 18:41:51.652948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.064 [2024-12-06 18:41:51.652957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.064 [2024-12-06 18:41:51.653123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.064 [2024-12-06 18:41:51.653277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.064 [2024-12-06 18:41:51.653285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.064 [2024-12-06 18:41:51.653290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.064 [2024-12-06 18:41:51.653297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.064 [2024-12-06 18:41:51.664980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.064 [2024-12-06 18:41:51.665537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.064 [2024-12-06 18:41:51.665568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.064 [2024-12-06 18:41:51.665577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.064 [2024-12-06 18:41:51.665750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.064 [2024-12-06 18:41:51.665905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.064 [2024-12-06 18:41:51.665912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.064 [2024-12-06 18:41:51.665918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.064 [2024-12-06 18:41:51.665924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.064 [2024-12-06 18:41:51.677627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.064 [2024-12-06 18:41:51.678227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.064 [2024-12-06 18:41:51.678258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.064 [2024-12-06 18:41:51.678267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.064 [2024-12-06 18:41:51.678433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.678587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.678594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.678600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.678606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.690302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.690890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.690922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.690931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.691097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.691251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.691258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.691264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.691270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.702953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.703502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.703534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.703543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.703716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.703870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.703878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.703884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.703889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.715564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.716150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.716182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.716190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.716356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.716511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.716518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.716523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.716529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.728216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.728737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.728769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.728782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.728951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.729104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.729111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.729117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.729123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.740951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.741402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.741418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.741424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.741574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.741731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.741738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.741744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.741749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.753566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.754223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.754256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.754265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.754432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.754586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.754594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.754600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.754607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.766165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.766682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.766705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.766712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.766869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.767025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.767032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.767038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.767043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.778889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.779262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.779277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.779282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.779433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.779583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.779591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.779595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.779600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.791559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.792115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.792147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.792156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.065 [2024-12-06 18:41:51.792322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.065 [2024-12-06 18:41:51.792476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.065 [2024-12-06 18:41:51.792484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.065 [2024-12-06 18:41:51.792489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.065 [2024-12-06 18:41:51.792495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.065 [2024-12-06 18:41:51.804264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.065 [2024-12-06 18:41:51.804764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.065 [2024-12-06 18:41:51.804796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.065 [2024-12-06 18:41:51.804805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.066 [2024-12-06 18:41:51.804973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.066 [2024-12-06 18:41:51.805127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.066 [2024-12-06 18:41:51.805134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.066 [2024-12-06 18:41:51.805145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.066 [2024-12-06 18:41:51.805151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.066 [2024-12-06 18:41:51.816977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.066 [2024-12-06 18:41:51.817529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.066 [2024-12-06 18:41:51.817560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.066 [2024-12-06 18:41:51.817570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.066 [2024-12-06 18:41:51.817743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.066 [2024-12-06 18:41:51.817897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.066 [2024-12-06 18:41:51.817904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.066 [2024-12-06 18:41:51.817909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.066 [2024-12-06 18:41:51.817916] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.066 [2024-12-06 18:41:51.829614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.066 [2024-12-06 18:41:51.830183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.066 [2024-12-06 18:41:51.830215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.066 [2024-12-06 18:41:51.830224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.066 [2024-12-06 18:41:51.830390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.066 [2024-12-06 18:41:51.830545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.066 [2024-12-06 18:41:51.830552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.066 [2024-12-06 18:41:51.830558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.066 [2024-12-06 18:41:51.830564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.066 [2024-12-06 18:41:51.842257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.066 [2024-12-06 18:41:51.842751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.066 [2024-12-06 18:41:51.842784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.066 [2024-12-06 18:41:51.842793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.066 [2024-12-06 18:41:51.842962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.066 [2024-12-06 18:41:51.843115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.066 [2024-12-06 18:41:51.843123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.066 [2024-12-06 18:41:51.843129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.066 [2024-12-06 18:41:51.843134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.327 [2024-12-06 18:41:51.854974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.327 [2024-12-06 18:41:51.855569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.327 [2024-12-06 18:41:51.855600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.327 [2024-12-06 18:41:51.855609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.327 [2024-12-06 18:41:51.855785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.327 [2024-12-06 18:41:51.855939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.327 [2024-12-06 18:41:51.855946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.327 [2024-12-06 18:41:51.855953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.327 [2024-12-06 18:41:51.855960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.327 [2024-12-06 18:41:51.867643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.327 [2024-12-06 18:41:51.868128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.327 [2024-12-06 18:41:51.868159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.327 [2024-12-06 18:41:51.868169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.327 [2024-12-06 18:41:51.868335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.327 [2024-12-06 18:41:51.868488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.327 [2024-12-06 18:41:51.868496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.327 [2024-12-06 18:41:51.868502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.327 [2024-12-06 18:41:51.868507] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.327 [2024-12-06 18:41:51.880364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.327 [2024-12-06 18:41:51.880847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.327 [2024-12-06 18:41:51.880863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.327 [2024-12-06 18:41:51.880869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.881020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.881170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.881177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.881182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.881187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.893010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.893497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.893510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.893520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.893674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.893825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.893832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.893837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.893842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.905662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.906111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.906125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.906130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.906280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.906431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.906439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.906444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.906449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.918263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.918775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.918807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.918816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.918982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.919135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.919143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.919150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.919156] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.930985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.931539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.931571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.931580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.931753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.931911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.931918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.931924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.931931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.943613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.944198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.944230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.944239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.944405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.944558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.944565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.944571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.944577] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.956271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.956921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.956954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.956963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.957130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.957284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.957291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.957297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.957303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.968992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.969609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.969645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.969654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.969820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.969974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.969982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.969991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.969997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.981700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.982304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.982335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.982344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.982510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.982670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.982679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.982685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.982691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:51.994367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:51.994952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:51.994984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:51.994993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:51.995159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:51.995312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:51.995319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:51.995326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:51.995332] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:52.007013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:52.007536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:52.007568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:52.007577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:52.007752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:52.007907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:52.007914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:52.007920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:52.007926] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:52.019624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:52.020228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:52.020260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:52.020270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:52.020437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:52.020591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:52.020599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:52.020606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:52.020611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:52.032298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:52.032897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:52.032929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:52.032938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:52.033106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:52.033259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:52.033268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:52.033274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:52.033280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 [2024-12-06 18:41:52.044971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:52.045462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:52.045477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:52.045483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:52.045634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:52.045790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:52.045797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:52.045802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:52.045807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.328 6057.60 IOPS, 23.66 MiB/s [2024-12-06T17:41:52.112Z] [2024-12-06 18:41:52.058764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.328 [2024-12-06 18:41:52.059250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.328 [2024-12-06 18:41:52.059264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.328 [2024-12-06 18:41:52.059273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.328 [2024-12-06 18:41:52.059423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.328 [2024-12-06 18:41:52.059573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.328 [2024-12-06 18:41:52.059582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.328 [2024-12-06 18:41:52.059588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.328 [2024-12-06 18:41:52.059593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.329 [2024-12-06 18:41:52.071430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.329 [2024-12-06 18:41:52.071992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.329 [2024-12-06 18:41:52.072025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.329 [2024-12-06 18:41:52.072035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.329 [2024-12-06 18:41:52.072202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.329 [2024-12-06 18:41:52.072356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.329 [2024-12-06 18:41:52.072364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.329 [2024-12-06 18:41:52.072370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.329 [2024-12-06 18:41:52.072377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.329 [2024-12-06 18:41:52.084092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.329 [2024-12-06 18:41:52.084717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.329 [2024-12-06 18:41:52.084750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.329 [2024-12-06 18:41:52.084759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.329 [2024-12-06 18:41:52.084924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.329 [2024-12-06 18:41:52.085078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.329 [2024-12-06 18:41:52.085086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.329 [2024-12-06 18:41:52.085092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.329 [2024-12-06 18:41:52.085098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.329 [2024-12-06 18:41:52.096800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.329 [2024-12-06 18:41:52.097167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.329 [2024-12-06 18:41:52.097184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.329 [2024-12-06 18:41:52.097189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.329 [2024-12-06 18:41:52.097339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.329 [2024-12-06 18:41:52.097494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.329 [2024-12-06 18:41:52.097501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.329 [2024-12-06 18:41:52.097507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.329 [2024-12-06 18:41:52.097512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.329 [2024-12-06 18:41:52.109487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.589 [2024-12-06 18:41:52.110015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.589 [2024-12-06 18:41:52.110048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.589 [2024-12-06 18:41:52.110057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.589 [2024-12-06 18:41:52.110223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.589 [2024-12-06 18:41:52.110377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.589 [2024-12-06 18:41:52.110385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.589 [2024-12-06 18:41:52.110391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.589 [2024-12-06 18:41:52.110397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.589 [2024-12-06 18:41:52.122091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.589 [2024-12-06 18:41:52.122547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.589 [2024-12-06 18:41:52.122563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.589 [2024-12-06 18:41:52.122569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.589 [2024-12-06 18:41:52.122724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.589 [2024-12-06 18:41:52.122876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.589 [2024-12-06 18:41:52.122883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.589 [2024-12-06 18:41:52.122889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.589 [2024-12-06 18:41:52.122894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.589 [2024-12-06 18:41:52.134751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.589 [2024-12-06 18:41:52.135205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.589 [2024-12-06 18:41:52.135218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.589 [2024-12-06 18:41:52.135224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.589 [2024-12-06 18:41:52.135374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.589 [2024-12-06 18:41:52.135525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.589 [2024-12-06 18:41:52.135531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.589 [2024-12-06 18:41:52.135540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.589 [2024-12-06 18:41:52.135546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.589 [2024-12-06 18:41:52.147376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.589 [2024-12-06 18:41:52.147949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.589 [2024-12-06 18:41:52.147981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.589 [2024-12-06 18:41:52.147990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.589 [2024-12-06 18:41:52.148156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.589 [2024-12-06 18:41:52.148309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.589 [2024-12-06 18:41:52.148317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.589 [2024-12-06 18:41:52.148323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.589 [2024-12-06 18:41:52.148330] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.589 [2024-12-06 18:41:52.160056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.589 [2024-12-06 18:41:52.160557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.589 [2024-12-06 18:41:52.160575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.589 [2024-12-06 18:41:52.160580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.160735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.160886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.160892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.160898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.160903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.172729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.173321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.173352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.173362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.173528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.173688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.173695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.173701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.173707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.185418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.185917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.185935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.185941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.186091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.186242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.186249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.186254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.186259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.198087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.198604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.198642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.198652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.198818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.198971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.198979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.198985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.198991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.210686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.211243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.211275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.211283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.211449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.211603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.211611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.211617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.211624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.223317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.223928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.223960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.223976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.224141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.224295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.224303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.224309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.224317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.236011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.236605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.236643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.236652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.236820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.236973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.236980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.236986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.236992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.248691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.249278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.249311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.249320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.249486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.249645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.249652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.249658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.249664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.261348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.261933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.261965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.261975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.262141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.262299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.262307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.262313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.262319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.274024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.274471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.274487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.274493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.274656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.274807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.274814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.274820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.274825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.286663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.287259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.287291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.287300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.287466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.287619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.287627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.287633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.287645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.299334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.590 [2024-12-06 18:41:52.299800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.590 [2024-12-06 18:41:52.299816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:57.590 [2024-12-06 18:41:52.299822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:57.590 [2024-12-06 18:41:52.299972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:57.590 [2024-12-06 18:41:52.300123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.590 [2024-12-06 18:41:52.300129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.590 [2024-12-06 18:41:52.300138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.590 [2024-12-06 18:41:52.300144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.590 [2024-12-06 18:41:52.311968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.590 [2024-12-06 18:41:52.312448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-06 18:41:52.312462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.590 [2024-12-06 18:41:52.312468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.590 [2024-12-06 18:41:52.312618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.590 [2024-12-06 18:41:52.312774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.590 [2024-12-06 18:41:52.312780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.590 [2024-12-06 18:41:52.312786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.590 [2024-12-06 18:41:52.312791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.590 [2024-12-06 18:41:52.324621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.590 [2024-12-06 18:41:52.325028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-06 18:41:52.325041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.590 [2024-12-06 18:41:52.325047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.590 [2024-12-06 18:41:52.325196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.590 [2024-12-06 18:41:52.325346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.590 [2024-12-06 18:41:52.325353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.590 [2024-12-06 18:41:52.325358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.590 [2024-12-06 18:41:52.325363] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.590 [2024-12-06 18:41:52.337324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.590 [2024-12-06 18:41:52.337806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.590 [2024-12-06 18:41:52.337820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.591 [2024-12-06 18:41:52.337825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.591 [2024-12-06 18:41:52.337975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.591 [2024-12-06 18:41:52.338125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.591 [2024-12-06 18:41:52.338132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.591 [2024-12-06 18:41:52.338137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.591 [2024-12-06 18:41:52.338142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.591 [2024-12-06 18:41:52.349972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.591 [2024-12-06 18:41:52.350420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-06 18:41:52.350433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.591 [2024-12-06 18:41:52.350439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.591 [2024-12-06 18:41:52.350588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.591 [2024-12-06 18:41:52.350743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.591 [2024-12-06 18:41:52.350750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.591 [2024-12-06 18:41:52.350755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.591 [2024-12-06 18:41:52.350760] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.591 [2024-12-06 18:41:52.362581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.591 [2024-12-06 18:41:52.363038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.591 [2024-12-06 18:41:52.363051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.591 [2024-12-06 18:41:52.363057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.591 [2024-12-06 18:41:52.363207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.591 [2024-12-06 18:41:52.363357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.591 [2024-12-06 18:41:52.363364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.591 [2024-12-06 18:41:52.363369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.591 [2024-12-06 18:41:52.363374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.852 [2024-12-06 18:41:52.375203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.375682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.375695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.375701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.375851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.376002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.376009] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.376014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.376019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.852 [2024-12-06 18:41:52.387851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.388339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.388352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.388361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.388511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.388668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.388676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.388681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.388685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.852 [2024-12-06 18:41:52.400516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.400988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.401001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.401007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.401156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.401307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.401313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.401318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.401323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.852 [2024-12-06 18:41:52.413154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.413855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.413876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.413882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.414038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.414190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.414197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.414202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.414207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.852 [2024-12-06 18:41:52.425752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.426244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.426257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.426263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.426412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.426566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.426573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.426578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.426583] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.852 [2024-12-06 18:41:52.438422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.438910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.438924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.438929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.439079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.439230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.439236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.439241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.439246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.852 [2024-12-06 18:41:52.451063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.451506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.451519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.451524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.451678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.451829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.451836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.451841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.451846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.852 [2024-12-06 18:41:52.463664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.464146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.464159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.464164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.464314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.464465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.464470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.464478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.464484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.852 [2024-12-06 18:41:52.476316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.476779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.852 [2024-12-06 18:41:52.476811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.852 [2024-12-06 18:41:52.476820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.852 [2024-12-06 18:41:52.476988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.852 [2024-12-06 18:41:52.477142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.852 [2024-12-06 18:41:52.477150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.852 [2024-12-06 18:41:52.477156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.852 [2024-12-06 18:41:52.477163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.852 [2024-12-06 18:41:52.489001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.852 [2024-12-06 18:41:52.489589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.489620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.489630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.489803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.489958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.489965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.489971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.489977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.853 [2024-12-06 18:41:52.501669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.502273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.502305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.502314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.502480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.502634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.502646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.502652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.502658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.853 [2024-12-06 18:41:52.514358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.514981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.515014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.515022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.515188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.515342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.515350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.515355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.515361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.853 [2024-12-06 18:41:52.527055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.527508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.527524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.527530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.527685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.527836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.527843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.527848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.527853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.853 [2024-12-06 18:41:52.539701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.540172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.540186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.540192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.540341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.540492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.540499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.540504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.540509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.853 [2024-12-06 18:41:52.552335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.552842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.552857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.552865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.553016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.553166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.553173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.553178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.553183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.853 [2024-12-06 18:41:52.565003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.565493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.565506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.565512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.565666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.565816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.565823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.565829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.565833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.853 [2024-12-06 18:41:52.577681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.578035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.578048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.578054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.578204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.578355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.578361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.578366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.578371] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.853 [2024-12-06 18:41:52.590346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.590798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.590812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.590817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.590967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.591120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.591126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.591131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.591136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.853 [2024-12-06 18:41:52.602980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.603421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.603435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.603440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.603590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.603745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.603752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.603757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.603763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.853 [2024-12-06 18:41:52.615593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.615981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.615994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.616000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.616149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.616299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.616306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.853 [2024-12-06 18:41:52.616311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.853 [2024-12-06 18:41:52.616316] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.853 [2024-12-06 18:41:52.628290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.853 [2024-12-06 18:41:52.628743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.853 [2024-12-06 18:41:52.628757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:57.853 [2024-12-06 18:41:52.628762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:57.853 [2024-12-06 18:41:52.628912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:57.853 [2024-12-06 18:41:52.629063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.853 [2024-12-06 18:41:52.629069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.854 [2024-12-06 18:41:52.629077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.854 [2024-12-06 18:41:52.629083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.114 [2024-12-06 18:41:52.640914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.114 [2024-12-06 18:41:52.641363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.114 [2024-12-06 18:41:52.641377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.114 [2024-12-06 18:41:52.641382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.114 [2024-12-06 18:41:52.641531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.114 [2024-12-06 18:41:52.641687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.114 [2024-12-06 18:41:52.641694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.114 [2024-12-06 18:41:52.641699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.114 [2024-12-06 18:41:52.641704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.114 [2024-12-06 18:41:52.653537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.114 [2024-12-06 18:41:52.653963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.114 [2024-12-06 18:41:52.653977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.114 [2024-12-06 18:41:52.653983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.114 [2024-12-06 18:41:52.654132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.114 [2024-12-06 18:41:52.654283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.114 [2024-12-06 18:41:52.654289] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.114 [2024-12-06 18:41:52.654295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.114 [2024-12-06 18:41:52.654300] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.114 [2024-12-06 18:41:52.666128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.114 [2024-12-06 18:41:52.666564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.114 [2024-12-06 18:41:52.666576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.114 [2024-12-06 18:41:52.666582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.114 [2024-12-06 18:41:52.666735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.114 [2024-12-06 18:41:52.666886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.114 [2024-12-06 18:41:52.666892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.114 [2024-12-06 18:41:52.666898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.114 [2024-12-06 18:41:52.666902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.114 [2024-12-06 18:41:52.678755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.114 [2024-12-06 18:41:52.679235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.114 [2024-12-06 18:41:52.679249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.114 [2024-12-06 18:41:52.679254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.114 [2024-12-06 18:41:52.679404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.114 [2024-12-06 18:41:52.679554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.114 [2024-12-06 18:41:52.679560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.114 [2024-12-06 18:41:52.679566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.114 [2024-12-06 18:41:52.679570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.114 [2024-12-06 18:41:52.691402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.114 [2024-12-06 18:41:52.691817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.114 [2024-12-06 18:41:52.691831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.114 [2024-12-06 18:41:52.691836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.114 [2024-12-06 18:41:52.691986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.114 [2024-12-06 18:41:52.692136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.114 [2024-12-06 18:41:52.692143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.114 [2024-12-06 18:41:52.692148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.114 [2024-12-06 18:41:52.692152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.114 [2024-12-06 18:41:52.704123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.114 [2024-12-06 18:41:52.704606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.114 [2024-12-06 18:41:52.704619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.114 [2024-12-06 18:41:52.704624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.114 [2024-12-06 18:41:52.704778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.114 [2024-12-06 18:41:52.704929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.114 [2024-12-06 18:41:52.704936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.114 [2024-12-06 18:41:52.704941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.114 [2024-12-06 18:41:52.704945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.114 [2024-12-06 18:41:52.716786 - 18:41:52.717562] (one identical reconnect cycle elided)
00:29:58.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2316848 Killed "${NVMF_APP[@]}" "$@"
00:29:58.114 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:58.114 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:58.114 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:58.114 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:58.114 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:58.114 [2024-12-06 18:41:52.729399 - 18:41:52.730169] (one identical reconnect cycle elided)
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2318533
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2318533
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2318533 ']'
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:58.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:58.115 18:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:58.115 [2024-12-06 18:41:52.742040 - 18:41:52.742872] (one identical reconnect cycle elided)
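The trace above is the standard SPDK autotest restart pattern: nvmf_tgt is relaunched inside the cvl_0_0_ns_spdk namespace, its pid is captured as nvmfpid, and waitforlisten polls until the RPC server answers on /var/tmp/spdk.sock, giving up after max_retries=100. A hedged sketch of that wait loop, reusing the pid and paths from the trace; the real waitforlisten lives in test/common/autotest_common.sh and differs in detail:

    pid=2318533                    # pid captured as nvmfpid above
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    while (( max_retries-- > 0 )); do
        # Give up early if the target died before it ever listened.
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited"; break; }
        # rpc_get_methods succeeds once the RPC server accepts connections.
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            echo "nvmf_tgt is listening on $rpc_addr"
            break
        fi
        sleep 0.1
    done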
00:29:58.115 [2024-12-06 18:41:52.754714 - 18:41:52.768345] (two identical reconnect cycles elided)
00:29:58.115 [2024-12-06 18:41:52.780063 - 18:41:52.780893] (one identical reconnect cycle elided)
00:29:58.115 [2024-12-06 18:41:52.784674] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:29:58.115 [2024-12-06 18:41:52.784722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:58.115 [2024-12-06 18:41:52.792716 - 18:41:52.793657] (one identical reconnect cycle elided)
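The restarted target initializes with the core mask the script passed via nvmfappstart (-m 0xE, also visible as -c 0xE in the EAL parameters): 0xE is binary 1110, so SPDK reactors are pinned to cores 1-3 and core 0 is left free for the rest of the test. A one-liner to decode any such mask (illustrative sketch only):

    # Decode an SPDK/DPDK hex core mask into the cores it selects.
    mask=0xE
    for core in {0..7}; do
        (( mask & (1 << core) )) && echo "reactor on core $core"   # prints cores 1, 2, 3
    done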
00:29:58.115 [2024-12-06 18:41:52.805340 - 18:41:52.869535] (six further identical reconnect cycles elided while the new target initializes; each still fails with connect() errno = 111 against tqpair=0x1599c20 addr=10.0.0.2 port=4420 and ends with "Resetting controller failed")
00:29:58.116 [2024-12-06 18:41:52.874605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:58.116 [2024-12-06 18:41:52.881248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.116 [2024-12-06 18:41:52.881773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.116 [2024-12-06 18:41:52.881805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:58.116 [2024-12-06 18:41:52.881814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:58.116 [2024-12-06 18:41:52.881983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:58.116 [2024-12-06 18:41:52.882137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.116 [2024-12-06 18:41:52.882144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.116 [2024-12-06 18:41:52.882150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.116 [2024-12-06 18:41:52.882155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.116 [2024-12-06 18:41:52.893994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.116 [2024-12-06 18:41:52.894346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.116 [2024-12-06 18:41:52.894361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:58.116 [2024-12-06 18:41:52.894367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:58.116 [2024-12-06 18:41:52.894517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:58.116 [2024-12-06 18:41:52.894673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.116 [2024-12-06 18:41:52.894681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.116 [2024-12-06 18:41:52.894690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.116 [2024-12-06 18:41:52.894695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.377 [2024-12-06 18:41:52.903723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:58.377 [2024-12-06 18:41:52.903746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:58.377 [2024-12-06 18:41:52.903753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:58.377 [2024-12-06 18:41:52.903758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:58.377 [2024-12-06 18:41:52.903763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:58.377 [2024-12-06 18:41:52.904853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:58.377 [2024-12-06 18:41:52.905059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:58.377 [2024-12-06 18:41:52.905060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:58.377 [2024-12-06 18:41:52.906665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.377 [2024-12-06 18:41:52.907153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.377 [2024-12-06 18:41:52.907166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:58.377 [2024-12-06 18:41:52.907172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:58.377 [2024-12-06 18:41:52.907323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:58.377 [2024-12-06 18:41:52.907474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.377 [2024-12-06 18:41:52.907480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.377 [2024-12-06 18:41:52.907486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.377 [2024-12-06 18:41:52.907491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.377 [2024-12-06 18:41:52.919334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.377 [2024-12-06 18:41:52.919933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.377 [2024-12-06 18:41:52.919968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:58.377 [2024-12-06 18:41:52.919977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:58.377 [2024-12-06 18:41:52.920150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:58.377 [2024-12-06 18:41:52.920303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.377 [2024-12-06 18:41:52.920311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.377 [2024-12-06 18:41:52.920317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.377 [2024-12-06 18:41:52.920324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.377 [2024-12-06 18:41:52.932018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.377 [2024-12-06 18:41:52.932644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.377 [2024-12-06 18:41:52.932678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.377 [2024-12-06 18:41:52.932691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.377 [2024-12-06 18:41:52.932861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.377 [2024-12-06 18:41:52.933015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.377 [2024-12-06 18:41:52.933022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.377 [2024-12-06 18:41:52.933028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.377 [2024-12-06 18:41:52.933034] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.377 [2024-12-06 18:41:52.944712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.377 [2024-12-06 18:41:52.945320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.377 [2024-12-06 18:41:52.945354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.377 [2024-12-06 18:41:52.945364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.377 [2024-12-06 18:41:52.945532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.377 [2024-12-06 18:41:52.945692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.377 [2024-12-06 18:41:52.945699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.377 [2024-12-06 18:41:52.945705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.377 [2024-12-06 18:41:52.945711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.377 [2024-12-06 18:41:52.957394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.377 [2024-12-06 18:41:52.957883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.377 [2024-12-06 18:41:52.957900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.377 [2024-12-06 18:41:52.957905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.377 [2024-12-06 18:41:52.958056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.377 [2024-12-06 18:41:52.958208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.377 [2024-12-06 18:41:52.958215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.377 [2024-12-06 18:41:52.958220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.377 [2024-12-06 18:41:52.958225] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.377 [2024-12-06 18:41:52.970043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.377 [2024-12-06 18:41:52.970537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.377 [2024-12-06 18:41:52.970551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.377 [2024-12-06 18:41:52.970557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.377 [2024-12-06 18:41:52.970712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.377 [2024-12-06 18:41:52.970871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.377 [2024-12-06 18:41:52.970878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.377 [2024-12-06 18:41:52.970883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.377 [2024-12-06 18:41:52.970888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.377 [2024-12-06 18:41:52.982731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.377 [2024-12-06 18:41:52.983350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.377 [2024-12-06 18:41:52.983383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.377 [2024-12-06 18:41:52.983392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.377 [2024-12-06 18:41:52.983559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.377 [2024-12-06 18:41:52.983718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.377 [2024-12-06 18:41:52.983726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.377 [2024-12-06 18:41:52.983732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.377 [2024-12-06 18:41:52.983739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.377 [2024-12-06 18:41:52.995425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.377 [2024-12-06 18:41:52.995826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.377 [2024-12-06 18:41:52.995842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.377 [2024-12-06 18:41:52.995848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.377 [2024-12-06 18:41:52.995999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.377 [2024-12-06 18:41:52.996150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.377 [2024-12-06 18:41:52.996157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.377 [2024-12-06 18:41:52.996162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.377 [2024-12-06 18:41:52.996167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.377 [2024-12-06 18:41:53.008128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.377 [2024-12-06 18:41:53.008596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.377 [2024-12-06 18:41:53.008610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.377 [2024-12-06 18:41:53.008615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.377 [2024-12-06 18:41:53.008769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.377 [2024-12-06 18:41:53.008920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.377 [2024-12-06 18:41:53.008927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.377 [2024-12-06 18:41:53.008932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.377 [2024-12-06 18:41:53.008941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.377 [2024-12-06 18:41:53.020763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.377 [2024-12-06 18:41:53.021229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.377 [2024-12-06 18:41:53.021243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.377 [2024-12-06 18:41:53.021249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.377 [2024-12-06 18:41:53.021398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.377 [2024-12-06 18:41:53.021550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.377 [2024-12-06 18:41:53.021556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.377 [2024-12-06 18:41:53.021562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.377 [2024-12-06 18:41:53.021567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.378 [2024-12-06 18:41:53.033389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.378 [2024-12-06 18:41:53.034001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.378 [2024-12-06 18:41:53.034033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.378 [2024-12-06 18:41:53.034042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.378 [2024-12-06 18:41:53.034208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.378 [2024-12-06 18:41:53.034362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.378 [2024-12-06 18:41:53.034370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.378 [2024-12-06 18:41:53.034377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.378 [2024-12-06 18:41:53.034383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.378 [2024-12-06 18:41:53.046071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.378 [2024-12-06 18:41:53.046632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.378 [2024-12-06 18:41:53.046670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.378 [2024-12-06 18:41:53.046678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.378 [2024-12-06 18:41:53.046844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.378 [2024-12-06 18:41:53.046998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.378 [2024-12-06 18:41:53.047005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.378 [2024-12-06 18:41:53.047011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.378 [2024-12-06 18:41:53.047017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.378 5048.00 IOPS, 19.72 MiB/s [2024-12-06T17:41:53.162Z] [2024-12-06 18:41:53.059837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.378 [2024-12-06 18:41:53.060342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.378 [2024-12-06 18:41:53.060373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:58.378 [2024-12-06 18:41:53.060381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:58.378 [2024-12-06 18:41:53.060549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:58.378 [2024-12-06 18:41:53.060708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.378 [2024-12-06 18:41:53.060715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.378 [2024-12-06 18:41:53.060721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.378 [2024-12-06 18:41:53.060727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.378 [2024-12-06 18:41:53.072562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.378 [2024-12-06 18:41:53.073105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.378 [2024-12-06 18:41:53.073120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420
00:29:58.378 [2024-12-06 18:41:53.073126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set
00:29:58.378 [2024-12-06 18:41:53.073276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor
00:29:58.378 [2024-12-06 18:41:53.073427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.378 [2024-12-06 18:41:53.073432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.378 [2024-12-06 18:41:53.073438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.378 [2024-12-06 18:41:53.073442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.378 [2024-12-06 18:41:53.085285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.378 [2024-12-06 18:41:53.085730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.378 [2024-12-06 18:41:53.085760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.378 [2024-12-06 18:41:53.085769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.378 [2024-12-06 18:41:53.085937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.378 [2024-12-06 18:41:53.086090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.378 [2024-12-06 18:41:53.086097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.378 [2024-12-06 18:41:53.086102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.378 [2024-12-06 18:41:53.086108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.378 [2024-12-06 18:41:53.097943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.378 [2024-12-06 18:41:53.098373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.378 [2024-12-06 18:41:53.098388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.378 [2024-12-06 18:41:53.098398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.378 [2024-12-06 18:41:53.098548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.378 [2024-12-06 18:41:53.098703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.378 [2024-12-06 18:41:53.098709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.378 [2024-12-06 18:41:53.098715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.378 [2024-12-06 18:41:53.098720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.378 [2024-12-06 18:41:53.110564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.378 [2024-12-06 18:41:53.111127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.378 [2024-12-06 18:41:53.111158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.378 [2024-12-06 18:41:53.111167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.378 [2024-12-06 18:41:53.111333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.378 [2024-12-06 18:41:53.111485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.378 [2024-12-06 18:41:53.111492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.378 [2024-12-06 18:41:53.111497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.378 [2024-12-06 18:41:53.111503] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.378 [2024-12-06 18:41:53.123188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.378 [2024-12-06 18:41:53.123635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.378 [2024-12-06 18:41:53.123673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.378 [2024-12-06 18:41:53.123681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.378 [2024-12-06 18:41:53.123846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.378 [2024-12-06 18:41:53.124000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.378 [2024-12-06 18:41:53.124006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.378 [2024-12-06 18:41:53.124012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.378 [2024-12-06 18:41:53.124017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.378 [2024-12-06 18:41:53.135838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.378 [2024-12-06 18:41:53.136272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.378 [2024-12-06 18:41:53.136287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.378 [2024-12-06 18:41:53.136293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.378 [2024-12-06 18:41:53.136443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.378 [2024-12-06 18:41:53.136597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.378 [2024-12-06 18:41:53.136602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.378 [2024-12-06 18:41:53.136607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.378 [2024-12-06 18:41:53.136612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.378 [2024-12-06 18:41:53.148568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.378 [2024-12-06 18:41:53.149014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.378 [2024-12-06 18:41:53.149044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.378 [2024-12-06 18:41:53.149053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.378 [2024-12-06 18:41:53.149219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.378 [2024-12-06 18:41:53.149372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.378 [2024-12-06 18:41:53.149378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.378 [2024-12-06 18:41:53.149384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.378 [2024-12-06 18:41:53.149390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.640 [2024-12-06 18:41:53.161220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.640 [2024-12-06 18:41:53.161686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.640 [2024-12-06 18:41:53.161702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.640 [2024-12-06 18:41:53.161707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.640 [2024-12-06 18:41:53.161858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.640 [2024-12-06 18:41:53.162007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.640 [2024-12-06 18:41:53.162013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.640 [2024-12-06 18:41:53.162018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.640 [2024-12-06 18:41:53.162023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.640 [2024-12-06 18:41:53.173846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.640 [2024-12-06 18:41:53.174360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.640 [2024-12-06 18:41:53.174390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.640 [2024-12-06 18:41:53.174399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.640 [2024-12-06 18:41:53.174566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.640 [2024-12-06 18:41:53.174725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.640 [2024-12-06 18:41:53.174732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.640 [2024-12-06 18:41:53.174742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.640 [2024-12-06 18:41:53.174748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.640 [2024-12-06 18:41:53.186442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.640 [2024-12-06 18:41:53.186968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.640 [2024-12-06 18:41:53.186999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.640 [2024-12-06 18:41:53.187008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.640 [2024-12-06 18:41:53.187177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.640 [2024-12-06 18:41:53.187331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.640 [2024-12-06 18:41:53.187337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.640 [2024-12-06 18:41:53.187343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.640 [2024-12-06 18:41:53.187348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.640 [2024-12-06 18:41:53.199178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.640 [2024-12-06 18:41:53.199575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.640 [2024-12-06 18:41:53.199590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.640 [2024-12-06 18:41:53.199595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.640 [2024-12-06 18:41:53.199749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.640 [2024-12-06 18:41:53.199900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.640 [2024-12-06 18:41:53.199906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.640 [2024-12-06 18:41:53.199911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.640 [2024-12-06 18:41:53.199915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.640 [2024-12-06 18:41:53.211871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.640 [2024-12-06 18:41:53.212285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.640 [2024-12-06 18:41:53.212298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.212303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.212452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.212602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.212608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.212613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.212618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.641 [2024-12-06 18:41:53.224574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.225088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.225118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.225127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.225293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.225446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.225452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.225458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.225464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.641 [2024-12-06 18:41:53.237290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.237750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.237766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.237772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.237922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.238072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.238078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.238083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.238087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.641 [2024-12-06 18:41:53.249908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.250416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.250447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.250455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.250622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.250781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.250789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.250794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.250800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.641 [2024-12-06 18:41:53.262620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.263152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.263183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.263195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.263361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.263513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.263521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.263527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.263534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.641 [2024-12-06 18:41:53.275222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.275879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.275909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.275918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.276084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.276237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.276243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.276248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.276254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.641 [2024-12-06 18:41:53.287956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.288514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.288545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.288554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.288725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.288879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.288885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.288890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.288896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.641 [2024-12-06 18:41:53.300570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.301169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.301200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.301208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.301374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.301531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.301537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.301543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.301548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.641 [2024-12-06 18:41:53.313230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.313745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.313776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.313784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.313952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.314106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.314112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.314118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.314124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.641 [2024-12-06 18:41:53.325947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.641 [2024-12-06 18:41:53.326207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.641 [2024-12-06 18:41:53.326221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.641 [2024-12-06 18:41:53.326227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.641 [2024-12-06 18:41:53.326377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.641 [2024-12-06 18:41:53.326527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.641 [2024-12-06 18:41:53.326533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.641 [2024-12-06 18:41:53.326538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.641 [2024-12-06 18:41:53.326542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.905 [2024-12-06 18:41:53.566295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.905 [2024-12-06 18:41:53.566521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.905 [2024-12-06 18:41:53.566536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.905 [2024-12-06 18:41:53.566541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.905 [2024-12-06 18:41:53.566695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.905 [2024-12-06 18:41:53.566846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.905 [2024-12-06 18:41:53.566852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.905 [2024-12-06 18:41:53.566857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.905 [2024-12-06 18:41:53.566861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.905 [2024-12-06 18:41:53.578978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.905 [2024-12-06 18:41:53.579408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.905 [2024-12-06 18:41:53.579439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.905 [2024-12-06 18:41:53.579448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.905 [2024-12-06 18:41:53.579614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.905 [2024-12-06 18:41:53.579774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.905 [2024-12-06 18:41:53.579781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.905 [2024-12-06 18:41:53.579786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.905 [2024-12-06 18:41:53.579792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
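Note: errno 111 is ECONNREFUSED, so every reset attempt above is failing at the TCP connect() to 10.0.0.2:4420 — nothing is accepting connections on that port yet, and bdev_nvme keeps cycling the controller back into the failed state. A minimal way to probe the listener from the host (a sketch; assumes a netcat binary with -z support):

    # loops while connect() keeps returning ECONNREFUSED (errno 111)
    until nc -z 10.0.0.2 4420 2>/dev/null; do sleep 1; done
    echo '10.0.0.2:4420 is now accepting connections'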
00:29:58.905 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.905 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:58.905 [2024-12-06 18:41:53.591628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.905 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:58.905 [2024-12-06 18:41:53.592162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.905 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:58.905 [2024-12-06 18:41:53.592193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.905 [2024-12-06 18:41:53.592202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.905 [2024-12-06 18:41:53.592368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.905 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.905 [2024-12-06 18:41:53.592521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.905 [2024-12-06 18:41:53.592528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.905 [2024-12-06 18:41:53.592533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.905 [2024-12-06 18:41:53.592539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.905 [2024-12-06 18:41:53.604234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.905 [2024-12-06 18:41:53.604768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.905 [2024-12-06 18:41:53.604800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.905 [2024-12-06 18:41:53.604809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.905 [2024-12-06 18:41:53.604981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.905 [2024-12-06 18:41:53.605134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.905 [2024-12-06 18:41:53.605141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.905 [2024-12-06 18:41:53.605146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.905 [2024-12-06 18:41:53.605152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
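The "(( i == 0 )) ... return 0" xtrace above is the tail of a readiness countdown in autotest_common.sh; the loop body itself is not shown in this log, but its shape is roughly the following (a sketch — the function name, retry count, sleep interval, and probe RPC are all assumptions):

    wait_for_tgt() {                        # hypothetical name; mirrors the trace above
        local i
        for ((i = 10; i != 0; i--)); do
            # any cheap RPC works as a liveness probe; rpc_get_methods assumed here
            scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break
            sleep 0.5
        done
        (( i == 0 )) && return 1            # countdown exhausted: target never answered
        return 0                            # matches the 'return 0' in the trace
    }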
00:29:58.905 [2024-12-06 18:41:53.616848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.905 [2024-12-06 18:41:53.617349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.905 [2024-12-06 18:41:53.617364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.905 [2024-12-06 18:41:53.617370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.905 [2024-12-06 18:41:53.617520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.905 [2024-12-06 18:41:53.617675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.905 [2024-12-06 18:41:53.617681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.905 [2024-12-06 18:41:53.617686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.905 [2024-12-06 18:41:53.617690] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.905 [2024-12-06 18:41:53.629515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.905 [2024-12-06 18:41:53.629986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.905 [2024-12-06 18:41:53.630006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.905 [2024-12-06 18:41:53.630012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.905 [2024-12-06 18:41:53.630162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.905 [2024-12-06 18:41:53.630312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.905 [2024-12-06 18:41:53.630318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.905 [2024-12-06 18:41:53.630323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.905 [2024-12-06 18:41:53.630328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
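On the initiator side, the controller that bdevperf keeps resetting corresponds to an attach along these lines (a sketch; the bdev name Nvme1 is inferred from the Job line further below, and the flags are the standard scripts/rpc.py options for bdev_nvme_attach_controller):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1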
00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.906 [2024-12-06 18:41:53.638332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.906 [2024-12-06 18:41:53.642153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.906 [2024-12-06 18:41:53.642685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.906 [2024-12-06 18:41:53.642716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.906 [2024-12-06 18:41:53.642725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.906 [2024-12-06 18:41:53.642892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.906 [2024-12-06 18:41:53.643045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.906 [2024-12-06 18:41:53.643051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.906 [2024-12-06 18:41:53.643056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.906 [2024-12-06 18:41:53.643062] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
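The "TCP Transport Init" notice above is the first step of the target bring-up; the remaining rpc_cmd calls traced below complete it. Run back to back, the sequence is (arguments copied from the xtrace; rpc_cmd is the test harness's wrapper around scripts/rpc.py):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420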
00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.906 [2024-12-06 18:41:53.654754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.906 [2024-12-06 18:41:53.655326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.906 [2024-12-06 18:41:53.655356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.906 [2024-12-06 18:41:53.655365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.906 [2024-12-06 18:41:53.655531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.906 [2024-12-06 18:41:53.655692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.906 [2024-12-06 18:41:53.655699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.906 [2024-12-06 18:41:53.655709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.906 [2024-12-06 18:41:53.655714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.906 [2024-12-06 18:41:53.667390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.906 [2024-12-06 18:41:53.668020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.906 [2024-12-06 18:41:53.668051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.906 [2024-12-06 18:41:53.668060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.906 [2024-12-06 18:41:53.668227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.906 [2024-12-06 18:41:53.668380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.906 [2024-12-06 18:41:53.668386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.906 [2024-12-06 18:41:53.668391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.906 [2024-12-06 18:41:53.668397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
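As a sanity check on the Latency table further below: bdevperf runs 4096-byte IOs here, so MiB/s should equal IOPS x 4096 / 1048576, and the reported 9199.75 IOPS does line up with 35.94 MiB/s (bc truncates rather than rounds the last digit):

    $ echo 'scale=2; 9199.75 * 4096 / 1048576' | bc
    35.93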
00:29:58.906 Malloc0 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.906 [2024-12-06 18:41:53.680095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.906 [2024-12-06 18:41:53.680594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.906 [2024-12-06 18:41:53.680609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:58.906 [2024-12-06 18:41:53.680614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:58.906 [2024-12-06 18:41:53.680777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:58.906 [2024-12-06 18:41:53.680928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.906 [2024-12-06 18:41:53.680933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.906 [2024-12-06 18:41:53.680939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.906 [2024-12-06 18:41:53.680943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.906 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:59.166 [2024-12-06 18:41:53.692840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.166 [2024-12-06 18:41:53.693455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.166 [2024-12-06 18:41:53.693485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1599c20 with addr=10.0.0.2, port=4420 00:29:59.166 [2024-12-06 18:41:53.693498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1599c20 is same with the state(6) to be set 00:29:59.166 [2024-12-06 18:41:53.693670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1599c20 (9): Bad file descriptor 00:29:59.166 [2024-12-06 18:41:53.693824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.166 [2024-12-06 18:41:53.693830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.166 [2024-12-06 18:41:53.693836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:59.166 [2024-12-06 18:41:53.693842] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.166 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.166 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:59.166 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.166 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:59.166 [2024-12-06 18:41:53.701416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:59.166 [2024-12-06 18:41:53.705525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.166 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.166 18:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2317220 00:29:59.166 [2024-12-06 18:41:53.730521] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:30:00.368 4926.57 IOPS, 19.24 MiB/s [2024-12-06T17:41:56.096Z] 5921.75 IOPS, 23.13 MiB/s [2024-12-06T17:41:57.479Z] 6706.44 IOPS, 26.20 MiB/s [2024-12-06T17:41:58.417Z] 7331.70 IOPS, 28.64 MiB/s [2024-12-06T17:41:59.357Z] 7840.27 IOPS, 30.63 MiB/s [2024-12-06T17:42:00.298Z] 8282.83 IOPS, 32.35 MiB/s [2024-12-06T17:42:01.237Z] 8636.92 IOPS, 33.74 MiB/s [2024-12-06T17:42:02.177Z] 8936.71 IOPS, 34.91 MiB/s [2024-12-06T17:42:02.177Z] 9197.60 IOPS, 35.93 MiB/s 00:30:07.393 Latency(us) 00:30:07.393 [2024-12-06T17:42:02.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.393 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:07.393 Verification LBA range: start 0x0 length 0x4000 00:30:07.393 Nvme1n1 : 15.01 9199.75 35.94 13025.85 0.00 5740.20 552.96 23920.64 00:30:07.393 [2024-12-06T17:42:02.177Z] =================================================================================================================== 00:30:07.393 [2024-12-06T17:42:02.177Z] Total : 9199.75 35.94 13025.85 0.00 5740.20 552.96 23920.64 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.653 rmmod nvme_tcp 00:30:07.653 rmmod nvme_fabrics 00:30:07.653 rmmod nvme_keyring 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:07.653 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2318533 ']' 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2318533 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2318533 ']' 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2318533 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2318533 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2318533' 00:30:07.654 killing process with pid 2318533 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2318533 00:30:07.654 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2318533 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.914 18:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.827 18:42:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:09.827 00:30:09.827 real 0m28.126s 00:30:09.827 user 1m2.789s 00:30:09.827 sys 0m7.746s 00:30:09.827 18:42:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.827 18:42:04 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:09.827 ************************************ 00:30:09.827 END TEST nvmf_bdevperf 00:30:09.827 ************************************ 00:30:09.827 18:42:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:09.827 18:42:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:09.827 18:42:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.827 18:42:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.088 ************************************ 00:30:10.088 START TEST nvmf_target_disconnect 00:30:10.088 ************************************ 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:10.088 * Looking for test storage... 00:30:10.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:10.088 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:10.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.089 --rc genhtml_branch_coverage=1 00:30:10.089 --rc genhtml_function_coverage=1 00:30:10.089 --rc genhtml_legend=1 00:30:10.089 --rc geninfo_all_blocks=1 00:30:10.089 --rc geninfo_unexecuted_blocks=1 00:30:10.089 00:30:10.089 ' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:10.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.089 --rc genhtml_branch_coverage=1 00:30:10.089 --rc genhtml_function_coverage=1 00:30:10.089 --rc genhtml_legend=1 00:30:10.089 --rc geninfo_all_blocks=1 00:30:10.089 --rc geninfo_unexecuted_blocks=1 00:30:10.089 00:30:10.089 ' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:10.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.089 --rc genhtml_branch_coverage=1 00:30:10.089 --rc genhtml_function_coverage=1 00:30:10.089 --rc genhtml_legend=1 00:30:10.089 --rc geninfo_all_blocks=1 00:30:10.089 --rc geninfo_unexecuted_blocks=1 00:30:10.089 00:30:10.089 ' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:10.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:10.089 --rc genhtml_branch_coverage=1 00:30:10.089 --rc genhtml_function_coverage=1 00:30:10.089 --rc genhtml_legend=1 00:30:10.089 --rc geninfo_all_blocks=1 00:30:10.089 --rc geninfo_unexecuted_blocks=1 00:30:10.089 00:30:10.089 ' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:10.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:10.089 18:42:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:18.228 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:18.228 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.228 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:18.229 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:18.229 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
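[Editor's note] The nvmf_tcp_init trace that follows is dense. Reconstructed from the xtrace itself, the network plumbing it performs boils down to roughly the sequence below (a sketch for readers, not the harness code; cvl_0_0/cvl_0_1 are the renamed E810 ports discovered above, the namespace name comes from the trace, and everything runs as root; the harness additionally tags the iptables rule with an SPDK_NVMF comment):

    # Put the target-side port in its own namespace so initiator and target
    # traffic crosses a real TCP path between the two E810 ports.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1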
00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:18.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:30:18.229 00:30:18.229 --- 10.0.0.2 ping statistics --- 00:30:18.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.229 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:18.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:30:18.229 00:30:18.229 --- 10.0.0.1 ping statistics --- 00:30:18.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.229 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:18.229 ************************************ 00:30:18.229 START TEST nvmf_target_disconnect_tc1 00:30:18.229 ************************************ 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:18.229 18:42:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:18.229 [2024-12-06 18:42:12.643556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:18.229 [2024-12-06 18:42:12.643670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7bae0 with addr=10.0.0.2, port=4420 00:30:18.229 [2024-12-06 18:42:12.643707] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:18.229 [2024-12-06 18:42:12.643719] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:18.229 [2024-12-06 18:42:12.643727] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:18.229 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:18.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:18.229 Initializing NVMe Controllers 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:18.229 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:18.229 00:30:18.229 real 0m0.141s 00:30:18.229 user 0m0.061s 00:30:18.229 sys 0m0.079s 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:18.230 ************************************ 00:30:18.230 END TEST nvmf_target_disconnect_tc1 00:30:18.230 ************************************ 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
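[Editor's note] tc1, which just concluded above, is a negative test: no target is listening yet, so the reconnect example is required to fail. In spirit (a hedged paraphrase of the NOT/valid_exec_arg wrapper traced above; NOT is the autotest_common.sh helper that inverts an exit status):

    # Test passes only if reconnect exits non-zero.
    NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    # Here connect() fails with errno 111, the probe aborts, es=1, so tc1 PASSes.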
00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:18.230 ************************************ 00:30:18.230 START TEST nvmf_target_disconnect_tc2 00:30:18.230 ************************************ 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2325164 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2325164 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2325164 ']' 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:18.230 18:42:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:18.230 [2024-12-06 18:42:12.803633] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:30:18.230 [2024-12-06 18:42:12.803706] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.230 [2024-12-06 18:42:12.903876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:18.230 [2024-12-06 18:42:12.956194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:18.230 [2024-12-06 18:42:12.956249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:18.230 [2024-12-06 18:42:12.956258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:18.230 [2024-12-06 18:42:12.956265] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:18.230 [2024-12-06 18:42:12.956271] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:18.230 [2024-12-06 18:42:12.958473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:18.230 [2024-12-06 18:42:12.958636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:18.230 [2024-12-06 18:42:12.958798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:18.230 [2024-12-06 18:42:12.958897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 Malloc0 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 [2024-12-06 18:42:13.728431] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 18:42:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 [2024-12-06 18:42:13.768846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2325296 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:19.176 18:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:21.118 18:42:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2325164 00:30:21.118 18:42:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error 
(sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 [2024-12-06 18:42:15.808376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed 
with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 [2024-12-06 18:42:15.808771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 
00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Write completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 Read completed with error (sct=0, sc=8) 00:30:21.118 starting I/O failed 00:30:21.118 [2024-12-06 18:42:15.809076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:21.118 [2024-12-06 18:42:15.809514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.118 [2024-12-06 18:42:15.809544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.809978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.810038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.810433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.810448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.810836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.810851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.811137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.811150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.811373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.811384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 
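[Editor's note] In the three 32-entry completion dumps above, every outstanding command is completed with sct=0, sc=8. Decoded against the enum names in SPDK's include/spdk/nvme_spec.h (naming as I recall that header; worth confirming against your tree):

    sct=0x0 -> SPDK_NVME_SCT_GENERIC
    sc=0x08 -> SPDK_NVME_SC_ABORTED_SQ_DELETION
    # "Command Aborted due to SQ Deletion": the expected way in-flight I/O is
    # drained after `kill -9 2325164` removes the target and the qpairs drop.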
00:30:21.119 [2024-12-06 18:42:15.811606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.811620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.812130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.812191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.812540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.812555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.812991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.813051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.813408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.813422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.813862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.813935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.814301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.814316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.814533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.814545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.814801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.814814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.815144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.815156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 
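[Editor's note] The repeating posix_sock_create failures here all carry errno 111. A quick way to decode it on Linux, should you need to:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused: the target was SIGKILLed, so every
    # reconnect attempt is refused unless a listener reappears on 10.0.0.2:4420.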
00:30:21.119 [2024-12-06 18:42:15.815543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.815555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.815737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.815749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.816038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.816050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.816381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.816394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.816730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.816743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.817045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.817057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.817365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.817377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.817579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.817591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.817918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.817930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.818245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.818257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 
00:30:21.119 [2024-12-06 18:42:15.818618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.818632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.818954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.818966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.819281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.819292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.819607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.819619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.819917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.819929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.820219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.820231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.820468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.820480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.820775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.820789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.821101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.821113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 00:30:21.119 [2024-12-06 18:42:15.821414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.119 [2024-12-06 18:42:15.821426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.119 qpair failed and we were unable to recover it. 
00:30:21.119 [2024-12-06 18:42:15.821759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.119 [2024-12-06 18:42:15.821772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:21.119 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously, roughly 200 more times, from 18:42:15.822 through 18:42:15.894 as the initiator retries tqpair=0x7f63bc000b90 against 10.0.0.2:4420; every attempt fails with errno = 111 and no qpair recovers ...]
00:30:21.125 [2024-12-06 18:42:15.894545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.125 [2024-12-06 18:42:15.894576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:21.125 qpair failed and we were unable to recover it.
00:30:21.125 [2024-12-06 18:42:15.894969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.125 [2024-12-06 18:42:15.894999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.125 qpair failed and we were unable to recover it. 00:30:21.125 [2024-12-06 18:42:15.895336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.125 [2024-12-06 18:42:15.895364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.125 qpair failed and we were unable to recover it. 00:30:21.125 [2024-12-06 18:42:15.895710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.125 [2024-12-06 18:42:15.895741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.125 qpair failed and we were unable to recover it. 00:30:21.125 [2024-12-06 18:42:15.896111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.125 [2024-12-06 18:42:15.896139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.125 qpair failed and we were unable to recover it. 00:30:21.125 [2024-12-06 18:42:15.896466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.125 [2024-12-06 18:42:15.896494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.125 qpair failed and we were unable to recover it. 00:30:21.125 [2024-12-06 18:42:15.896870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.125 [2024-12-06 18:42:15.896906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.125 qpair failed and we were unable to recover it. 00:30:21.413 [2024-12-06 18:42:15.897274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.897308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 00:30:21.413 [2024-12-06 18:42:15.897686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.897717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 00:30:21.413 [2024-12-06 18:42:15.899563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.899632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 00:30:21.413 [2024-12-06 18:42:15.899968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.900003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 
00:30:21.413 [2024-12-06 18:42:15.900349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.900378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 00:30:21.413 [2024-12-06 18:42:15.900726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.900755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 00:30:21.413 [2024-12-06 18:42:15.901135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.901163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 00:30:21.413 [2024-12-06 18:42:15.901531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.901559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 00:30:21.413 [2024-12-06 18:42:15.901940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.413 [2024-12-06 18:42:15.901971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.413 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.902338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.902368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.902742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.902772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.903128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.903156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.903530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.903559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.903975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.904005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 
00:30:21.414 [2024-12-06 18:42:15.904355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.904384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.904734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.904763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.905123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.905151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.905484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.905512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.905862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.905893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.906227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.906256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.906495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.906522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.906820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.906851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.907217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.907245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.907602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.907631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 
00:30:21.414 [2024-12-06 18:42:15.907999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.908030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.908391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.908420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.908774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.908803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.909153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.909183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.909557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.909586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.909942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.909971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.910319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.910347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.910720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.910750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.911120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.911148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.911476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.911507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 
00:30:21.414 [2024-12-06 18:42:15.913321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.913385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.913745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.913783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.914162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.914191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.914553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.914582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.914931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.914961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.915335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.915373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.915615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.915735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.916092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.916123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.916477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.916508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.916865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.916896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 
00:30:21.414 [2024-12-06 18:42:15.917248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.917277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.917633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.414 [2024-12-06 18:42:15.917671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.414 qpair failed and we were unable to recover it. 00:30:21.414 [2024-12-06 18:42:15.918018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.918046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.918422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.918450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.918813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.918843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.919199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.919228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.919601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.919630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.919993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.920022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.920393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.920421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.920786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.920816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 
00:30:21.415 [2024-12-06 18:42:15.921177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.921206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.921554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.921585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.921959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.921990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.922331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.922360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.922612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.922651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.922982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.923013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.923258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.923291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.923531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.923561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.923963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.923993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.924389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.924418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 
00:30:21.415 [2024-12-06 18:42:15.924664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.924697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.925063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.925092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.925318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.925351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.925582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.925616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.925992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.926021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.926382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.926410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.926645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.926674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.926987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.927015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.927346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.927375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.927749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.927784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 
00:30:21.415 [2024-12-06 18:42:15.928032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.928059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.928399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.928428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.928781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.928812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.929174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.929203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.929539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.929571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.929814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.929852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.930214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.930242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.930601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.930630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.930951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.930981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 00:30:21.415 [2024-12-06 18:42:15.931332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.415 [2024-12-06 18:42:15.931360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.415 qpair failed and we were unable to recover it. 
00:30:21.415 [2024-12-06 18:42:15.931726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.931757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.932091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.932121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.932494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.932522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.932895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.932924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.933271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.933300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.933543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.933576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.933920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.933950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.934325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.934355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.934708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.934738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.935101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.935129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 
00:30:21.416 [2024-12-06 18:42:15.935495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.935524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.935909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.935939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.936277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.936308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.936549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.936581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.936976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.937006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.937383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.937412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.937634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.937676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.937897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.937928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.938280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.938310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.938557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.938588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 
00:30:21.416 [2024-12-06 18:42:15.938965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.938999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.939359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.939387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.939800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.939831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.940085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.940113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.940380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.940412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.940771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.940799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.941096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.941125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.941438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.941467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.941826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.941856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.942111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.942143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 
00:30:21.416 [2024-12-06 18:42:15.942505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.942534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.942929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.942959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.943323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.943351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.943700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.943730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.944104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.944133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.944500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.944528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.944915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.944945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.945314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.416 [2024-12-06 18:42:15.945343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.416 qpair failed and we were unable to recover it. 00:30:21.416 [2024-12-06 18:42:15.945725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.417 [2024-12-06 18:42:15.945755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.417 qpair failed and we were unable to recover it. 00:30:21.417 [2024-12-06 18:42:15.946110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.417 [2024-12-06 18:42:15.946139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.417 qpair failed and we were unable to recover it. 
00:30:21.417 [2024-12-06 18:42:15.946316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.417 [2024-12-06 18:42:15.946344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:21.417 qpair failed and we were unable to recover it.
00:30:21.417 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with advancing timestamps for roughly 200 further connect attempts between 18:42:15.946 and 18:42:16.026 ...]
00:30:21.422 [2024-12-06 18:42:16.026682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.422 [2024-12-06 18:42:16.026712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:21.422 qpair failed and we were unable to recover it.
00:30:21.422 [2024-12-06 18:42:16.027106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.422 [2024-12-06 18:42:16.027136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-12-06 18:42:16.027548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.422 [2024-12-06 18:42:16.027580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-12-06 18:42:16.027982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.422 [2024-12-06 18:42:16.028012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-12-06 18:42:16.028360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.422 [2024-12-06 18:42:16.028390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-12-06 18:42:16.028797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.422 [2024-12-06 18:42:16.028826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-12-06 18:42:16.029156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.422 [2024-12-06 18:42:16.029185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.422 [2024-12-06 18:42:16.029543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.422 [2024-12-06 18:42:16.029574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.422 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.029941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.029971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.030316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.030345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.030701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.030731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 
00:30:21.423 [2024-12-06 18:42:16.031081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.031110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.031466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.031498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.031820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.031858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.032215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.032245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.032496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.032527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.032846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.032875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.033234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.033264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.033620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.033657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.034020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.034051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.034338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.034368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 
00:30:21.423 [2024-12-06 18:42:16.034727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.034757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.035125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.035154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.035531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.035560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.035889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.035920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.036287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.036318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.036677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.036709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.037083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.037113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.037471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.037500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.037909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.037939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.038298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.038329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 
00:30:21.423 [2024-12-06 18:42:16.038690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.038721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.039095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.039123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.039493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.039522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.039889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.039919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.040285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.040313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.040674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.040704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.041079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.041117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.041453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.041481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.041853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.041882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.042255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.042283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 
00:30:21.423 [2024-12-06 18:42:16.042522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.042555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.042907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.042938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.043313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.043344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.043706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.043736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.423 [2024-12-06 18:42:16.044084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.423 [2024-12-06 18:42:16.044112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.423 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.044474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.044503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.044872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.044901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.045279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.045309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.045665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.045695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.046062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.046091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 
00:30:21.424 [2024-12-06 18:42:16.046454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.046483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.046849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.046878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.047239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.047275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.047632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.047686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.048034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.048064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.048420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.048448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.048813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.048845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.049211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.049240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.049589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.049620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.050000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.050030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 
00:30:21.424 [2024-12-06 18:42:16.050383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.050413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.050778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.050807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.051176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.051206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.051560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.051588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.051935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.051966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.052327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.052355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.052698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.052728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.053082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.053110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.053461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.053490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.053863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.053893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 
00:30:21.424 [2024-12-06 18:42:16.054258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.054289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.054540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.054569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.054817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.054849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.055221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.055250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.055610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.055646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.056007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.056035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.056409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.056440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.056805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.056835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.057200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.057231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.057589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.057619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 
00:30:21.424 [2024-12-06 18:42:16.057963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.057992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.058352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.058382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.424 [2024-12-06 18:42:16.058629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.424 [2024-12-06 18:42:16.058670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.424 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.059019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.059048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.059426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.059454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.059820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.059850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.060219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.060248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.060483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.060514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.060867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.060899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.061149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.061178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 
00:30:21.425 [2024-12-06 18:42:16.061458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.061488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.061821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.061851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.062213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.062254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.062495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.062527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.062893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.062922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.063311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.063343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.063721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.063752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.064131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.064159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.064519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.064547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.064918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.064948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 
00:30:21.425 [2024-12-06 18:42:16.065312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.065342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.065703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.065734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.066082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.066113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.066565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.066602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.066968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.067001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.067251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.067285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.068372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.068417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.068797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.068828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.069067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.069095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.069376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.069408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 
00:30:21.425 [2024-12-06 18:42:16.069811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.069844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.070210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.070239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.070609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.070648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.070999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.071030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.071439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.071468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.071832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.071862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.072231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.072262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.072619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.072657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.073015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.073044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.425 [2024-12-06 18:42:16.073429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.073460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 
00:30:21.425 [2024-12-06 18:42:16.073834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.425 [2024-12-06 18:42:16.073864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.425 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.074107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.074139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.074492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.074524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.074872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.074907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.075258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.075286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.075536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.075569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.075908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.075939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.076305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.076336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.076690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.076721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 00:30:21.426 [2024-12-06 18:42:16.077110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.426 [2024-12-06 18:42:16.077140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:21.426 qpair failed and we were unable to recover it. 
00:30:21.426 [2024-12-06 18:42:16.077398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.426 [2024-12-06 18:42:16.077427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:21.426 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 2024-12-06 18:42:16.077811 through 18:42:16.103380; duplicate entries condensed ...]
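On Linux, errno = 111 is ECONNREFUSED: the peer at 10.0.0.2 actively refused the TCP connection on port 4420 (the NVMe/TCP default), which typically means no nvmf target listener was up at that moment. A minimal standalone sketch that reproduces the same errno (this is illustrative only, not SPDK code; the address and port merely mirror the log, and any reachable host with nothing listening on the port behaves the same):

    /* Connect to a port with no listener; on a reachable host this
     * fails with errno = 111 (ECONNREFUSED), as seen in the log above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port, prints: errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }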
[... the tqpair=0x7f63bc000b90 failure triplet recurs through 2024-12-06 18:42:16.104623, after which the failures continue against a different qpair pointer ...]
00:30:21.428 [2024-12-06 18:42:16.105180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.428 [2024-12-06 18:42:16.105295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:21.428 qpair failed and we were unable to recover it.
[... the same connect() failure, errno = 111, against tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 repeats continuously through 2024-12-06 18:42:16.158539; duplicate entries condensed ...]
00:30:21.431 [2024-12-06 18:42:16.158880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.431 [2024-12-06 18:42:16.158911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.431 qpair failed and we were unable to recover it. 00:30:21.431 [2024-12-06 18:42:16.159274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.431 [2024-12-06 18:42:16.159306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.431 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.159667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.159697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.159979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.160007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.160382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.160412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.160768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.160801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.161170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.161200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.161649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.161679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.162075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.162104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.162475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.162505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 
00:30:21.432 [2024-12-06 18:42:16.162757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.162787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.163139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.163169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.163500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.163530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.163883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.163915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.164274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.164304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.164665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.164697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.165105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.165135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.165500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.165530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.165871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.165902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.166259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.166294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 
00:30:21.432 [2024-12-06 18:42:16.166671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.166701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.167062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.167091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.167355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.167384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.167778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.167808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.168205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.168234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.168632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.168686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.169048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.169079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.169434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.169462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.169841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.169871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.170220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.170250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 
00:30:21.432 [2024-12-06 18:42:16.170476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.170504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.170921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.170951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.171203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.171233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.171619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.171656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.171957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.171986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.172324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.172353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.172619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.172664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.173042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.173071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.173415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.173444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 00:30:21.432 [2024-12-06 18:42:16.173821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.432 [2024-12-06 18:42:16.173852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.432 qpair failed and we were unable to recover it. 
00:30:21.433 [2024-12-06 18:42:16.174216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.433 [2024-12-06 18:42:16.174246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.433 qpair failed and we were unable to recover it. 00:30:21.433 [2024-12-06 18:42:16.174606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.433 [2024-12-06 18:42:16.174634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.433 qpair failed and we were unable to recover it. 00:30:21.433 [2024-12-06 18:42:16.175004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.433 [2024-12-06 18:42:16.175033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.433 qpair failed and we were unable to recover it. 00:30:21.433 [2024-12-06 18:42:16.175394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.433 [2024-12-06 18:42:16.175424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.433 qpair failed and we were unable to recover it. 00:30:21.433 [2024-12-06 18:42:16.175787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.433 [2024-12-06 18:42:16.175816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.433 qpair failed and we were unable to recover it. 00:30:21.433 [2024-12-06 18:42:16.176187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.433 [2024-12-06 18:42:16.176216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.433 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.176576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.176608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.176950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.176982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.177339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.177367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.177732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.177762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 
00:30:21.768 [2024-12-06 18:42:16.178175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.178204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.178521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.178551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.178930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.178960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.179326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.179355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.179714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.179743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.180117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.768 [2024-12-06 18:42:16.180146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.768 qpair failed and we were unable to recover it. 00:30:21.768 [2024-12-06 18:42:16.180518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.180546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.180909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.180939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.181317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.181345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.181687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.181724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 
00:30:21.769 [2024-12-06 18:42:16.182078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.182108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.182474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.182503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.182870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.182899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.183250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.183280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.183632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.183670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.184023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.184053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.184414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.184443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.184784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.184814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.185066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.185095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.185467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.185496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 
00:30:21.769 [2024-12-06 18:42:16.185867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.185898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.186229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.186257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.186602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.186631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.186873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.186902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.187269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.187297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.187662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.187693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.188046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.188075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.188461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.188490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.188831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.188860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.189196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.189225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 
00:30:21.769 [2024-12-06 18:42:16.189583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.189612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.189866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.189899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.190271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.190300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.190661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.190692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.191079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.191107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.191441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.191470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.191936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.191966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.192304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.192334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.192695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.192725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.193065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.193095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 
00:30:21.769 [2024-12-06 18:42:16.193472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.193500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.193900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.193930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.194288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.194317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.194682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.194713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.194971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.195004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.195366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.195395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.195756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.195787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.196137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.196166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.196528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.196557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.196940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.196977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 
00:30:21.769 [2024-12-06 18:42:16.197335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.197364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.197757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.197787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.198145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.198174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.198530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.198558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.198911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.198941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.199299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.199328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.199694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.199733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.200070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.200099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.200459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.200488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.200841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.200870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 
00:30:21.769 [2024-12-06 18:42:16.201235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.201264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.201621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.201658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.202019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.769 [2024-12-06 18:42:16.202047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.769 qpair failed and we were unable to recover it. 00:30:21.769 [2024-12-06 18:42:16.202411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.202440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.202843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.202873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.203240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.203270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.203623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.203662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.203973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.204001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.204368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.204397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.204750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.204779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 
00:30:21.770 [2024-12-06 18:42:16.205148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.205176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.205540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.205570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.205920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.205951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.206287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.206317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.206673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.206703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.207058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.207087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.207451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.207481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.207840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.207871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.208224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.208252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 00:30:21.770 [2024-12-06 18:42:16.208615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.770 [2024-12-06 18:42:16.208654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.770 qpair failed and we were unable to recover it. 
00:30:21.770 [2024-12-06 18:42:16.208990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.770 [2024-12-06 18:42:16.209020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:21.770 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED) / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 18:42:16.209375 through 18:42:16.288968, differing only in timestamps; duplicate entries omitted ...]
00:30:21.774 [2024-12-06 18:42:16.289400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.774 [2024-12-06 18:42:16.289435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:21.774 qpair failed and we were unable to recover it.
00:30:21.774 [2024-12-06 18:42:16.289771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.289800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.290035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.290067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.290446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.290476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.290844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.290873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.291259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.291288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.291635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.291674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.292022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.292052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.292399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.292429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.292787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.292816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.293173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.293201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 
00:30:21.774 [2024-12-06 18:42:16.293568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.293597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.293991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.294020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.294384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.294412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.294773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.294803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.295170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.295199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.295448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.295479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.295777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.295807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.296144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.296173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.296563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.296592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.296836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.296865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 
00:30:21.774 [2024-12-06 18:42:16.297088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.297119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.297373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.297403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.297758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.297788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.298156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.298185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.298560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.298590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.298951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.298984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.299280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.299310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.299676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.299706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.300062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.300091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.300484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.300512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 
00:30:21.774 [2024-12-06 18:42:16.300743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.300775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.301138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.301167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.301531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.301560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.301931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.301961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.302290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.302319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.302681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.302712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.303097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.303125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.303490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.303519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.303888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.303919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.304280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.304315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 
00:30:21.774 [2024-12-06 18:42:16.304676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.304706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.305101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.305130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.305486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.305514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.305745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.305774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.306138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.306166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.306528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.306556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.774 qpair failed and we were unable to recover it. 00:30:21.774 [2024-12-06 18:42:16.306929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.774 [2024-12-06 18:42:16.306959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.307301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.307330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.307694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.307725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.308094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.308123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 
00:30:21.775 [2024-12-06 18:42:16.308307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.308335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.308694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.308724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.308983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.309011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.309441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.309471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.309820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.309850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.310109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.310138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.310487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.310515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.310908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.310938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.311287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.311315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.311791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.311821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 
00:30:21.775 [2024-12-06 18:42:16.312181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.312212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.312550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.312578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.312928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.312959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.313321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.313350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.313725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.313755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.314022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.314054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.314412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.314441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.314728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.314757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.315146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.315175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.315560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.315589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 
00:30:21.775 [2024-12-06 18:42:16.316027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.316057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.316423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.316452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.316805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.316835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.317202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.317232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.317588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.317618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.317961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.317991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.318351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.318381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.318755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.318784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.319151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.319181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.319550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.319585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 
00:30:21.775 [2024-12-06 18:42:16.319841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.319871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.320021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.320053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.320415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.320444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.320696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.320727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.320981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.321009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.321382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.321410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.321800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.321830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.322188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.322218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.322584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.322612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.322971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.323000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 
00:30:21.775 [2024-12-06 18:42:16.323374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.323403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.323761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.323790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.324163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.324192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.324441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.324470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.324710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.324744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.325107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.325137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.325500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.325530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.325860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.325890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.326250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.326279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.326517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.326546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 
00:30:21.775 [2024-12-06 18:42:16.326952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.326984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.327343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.327373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.327735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.327765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.775 [2024-12-06 18:42:16.328116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.775 [2024-12-06 18:42:16.328147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.775 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.328517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.328546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.328886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.328918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.329254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.329285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.329657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.329687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.329936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.329964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.330328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.330358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 
00:30:21.776 [2024-12-06 18:42:16.330713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.330743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.330983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.331015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.331384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.331413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.331782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.331813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.332090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.332120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.332479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.332507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.332781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.332812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.333171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.333199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.333557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.333586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.334020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.334051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 
00:30:21.776 [2024-12-06 18:42:16.334404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.334434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.334800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.334831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.335203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.335233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.335602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.335632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.336008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.336037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.336407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.336435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.336685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.336716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.337106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.337135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.337489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.337520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 00:30:21.776 [2024-12-06 18:42:16.337881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.776 [2024-12-06 18:42:16.337910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.776 qpair failed and we were unable to recover it. 
00:30:21.776 [2024-12-06 18:42:16.338166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:21.776 [2024-12-06 18:42:16.338195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 
00:30:21.776 qpair failed and we were unable to recover it. 
00:30:21.776 [... the three messages above repeated ~210 times between 2024-12-06 18:42:16.338 and 18:42:16.417, every attempt failing with errno = 111 against the same tqpair=0x7f63b0000b90, addr=10.0.0.2, port=4420 ...] 
00:30:21.780 [2024-12-06 18:42:16.417864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:21.780 [2024-12-06 18:42:16.417894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 
00:30:21.780 qpair failed and we were unable to recover it. 
00:30:21.780 [2024-12-06 18:42:16.418250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.418280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.418634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.418675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.419027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.419058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.419394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.419423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.419781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.419813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.420145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.420173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.420536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.420573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.420965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.420996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.421359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.421388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.421759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.421788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 
00:30:21.780 [2024-12-06 18:42:16.422155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.422183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.422536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.422565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.422934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.422964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.423331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.423360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.423711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.423741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.424165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.424194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.424556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.424586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.424940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.424969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.425339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.425368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.425738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.425776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 
00:30:21.780 [2024-12-06 18:42:16.426142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.426171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.426530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.426558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.426926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.426956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.427297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.427326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.427690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.427722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.428083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.428111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.428481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.428510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.428886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.428917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.429256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.429284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.429646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.429675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 
00:30:21.780 [2024-12-06 18:42:16.430035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.430064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.430428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.430456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.430800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.430831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.431202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.431231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.431597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.431626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.431996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.432026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.432388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.432417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.432787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.432818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.433164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.433194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.433557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.433586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 
00:30:21.780 [2024-12-06 18:42:16.433962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.433992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.434341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.780 [2024-12-06 18:42:16.434371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.780 qpair failed and we were unable to recover it. 00:30:21.780 [2024-12-06 18:42:16.434633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.434675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.435055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.435084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.435446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.435474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.435864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.435894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.436152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.436190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.436550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.436579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.436947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.436977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.437336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.437364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 
00:30:21.781 [2024-12-06 18:42:16.437737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.437768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.438104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.438133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.438485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.438513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.438895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.438924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.439281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.439310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.439679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.439711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.439960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.439989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.440345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.440373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.440740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.440770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.441126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.441154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 
00:30:21.781 [2024-12-06 18:42:16.441514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.441544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.441878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.441910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.442257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.442286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.442648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.442678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.443049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.443078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.443440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.443469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.443824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.443853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.444213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.444242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.444603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.444632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.444991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.445020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 
00:30:21.781 [2024-12-06 18:42:16.445402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.445431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.445800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.445832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.446207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.446236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.446596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.446625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.446968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.446997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.447355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.447384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.447750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.447780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.448140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.448168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.448515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.448543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.448895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.448926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 
00:30:21.781 [2024-12-06 18:42:16.449178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.449206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.449559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.449588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.449944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.449974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.450339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.450368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.450705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.450736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.451084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.451114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.451478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.451512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.451758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.451791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.452160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.452190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.452555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.452584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 
00:30:21.781 [2024-12-06 18:42:16.452948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.452977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.453334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.453363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.453729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.453758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.454127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.781 [2024-12-06 18:42:16.454155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.781 qpair failed and we were unable to recover it. 00:30:21.781 [2024-12-06 18:42:16.454389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.454418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.454756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.454787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.455161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.455189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.455557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.455586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.456026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.456057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.456410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.456438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 
00:30:21.782 [2024-12-06 18:42:16.456816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.456846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.457208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.457237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.457603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.457631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.458015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.458044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.458404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.458432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.458783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.458812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.459185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.459213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.459575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.459604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.459981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.460010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.460362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.460392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 
00:30:21.782 [2024-12-06 18:42:16.460762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.460791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.461139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.461169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.461533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.461562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.461905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.461936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.462287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.462316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.462674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.462705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.463101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.463129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.463499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.463528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.463887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.463918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.464247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.464276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 
00:30:21.782 [2024-12-06 18:42:16.464620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.464660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.465011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.465041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.465399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.465427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.465793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.465822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.466183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.466212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.466577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.466607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.466963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.466999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.467346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.467375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.467741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.467771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 00:30:21.782 [2024-12-06 18:42:16.468163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.468192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it. 
00:30:21.782 [2024-12-06 18:42:16.468557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.782 [2024-12-06 18:42:16.468587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:21.782 qpair failed and we were unable to recover it.
[the same three-message sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously and without variation from 18:42:16.468 through 18:42:16.549 (console time 00:30:21.782 to 00:30:22.062); only the timestamps differ between occurrences]
00:30:22.062 [2024-12-06 18:42:16.549054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.549083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it.
00:30:22.062 [2024-12-06 18:42:16.549453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.549482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.549890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.549920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.550258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.550288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.550656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.550685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.551050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.551079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.551442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.551472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.551812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.551841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.552216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.552245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.552596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.552625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.553034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.553063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 
00:30:22.062 [2024-12-06 18:42:16.553419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.553449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.553808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.553838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.554198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.554227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.554594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.554636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.555100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.555130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.555462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.555491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.555625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.555667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.556105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.556135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.556500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.556529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.556898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.556927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 
00:30:22.062 [2024-12-06 18:42:16.557316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.557345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.557709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.557739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.558111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.558139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.558495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.558526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.558899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.558928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.559343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.559372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.559735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.559765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.560131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.560161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.560533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.560561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.560925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.560954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 
00:30:22.062 [2024-12-06 18:42:16.561330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.561360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.561697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.561728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.562074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.562103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.562469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.562499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.562864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.562894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.563330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.563360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.563716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.563746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.564098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.564137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.564477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.564507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.564943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.564973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 
00:30:22.062 [2024-12-06 18:42:16.565313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.565342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.565706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.062 [2024-12-06 18:42:16.565737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.062 qpair failed and we were unable to recover it. 00:30:22.062 [2024-12-06 18:42:16.565973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.566005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.566304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.566333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.566578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.566611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.566866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.566896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.567274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.567303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.567674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.567706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.568079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.568108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.568316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.568346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 
00:30:22.063 [2024-12-06 18:42:16.568719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.568751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.569098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.569127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.569539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.569569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.569903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.569942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.570301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.570331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.570779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.570812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.571161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.571190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.571535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.571564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.571946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.571977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.572335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.572365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 
00:30:22.063 [2024-12-06 18:42:16.572716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.572746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.573086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.573115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.573459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.573488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.573739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.573772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.574146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.574176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.574421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.574450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.574683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.574717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.575087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.575117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.575479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.575508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.575873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.575905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 
00:30:22.063 [2024-12-06 18:42:16.576268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.576298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.576659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.576690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.577020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.577050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.577420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.577450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.577812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.577844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.578280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.578309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.578630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.578667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.579037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.579067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.579429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.579458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.579697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.579730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 
00:30:22.063 [2024-12-06 18:42:16.580109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.580138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.580562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.580593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.580943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.580973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.581381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.581410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.581823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.581854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.582082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.582112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.582471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.582501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.582862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.582894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.583244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.583273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.583656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.583687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 
00:30:22.063 [2024-12-06 18:42:16.583979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.584008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.584366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.584395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.584649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.584680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.585057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.585094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.585434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.585463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.063 [2024-12-06 18:42:16.585816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.063 [2024-12-06 18:42:16.585847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.063 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.586216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.586245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.586613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.586650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.587008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.587037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.587410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.587440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 
00:30:22.064 [2024-12-06 18:42:16.587819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.587849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.588200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.588230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.588571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.588600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.588990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.589019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.589387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.589415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.589768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.589798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.590178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.590207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.590610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.590646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.591016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.591046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.591296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.591324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 
00:30:22.064 [2024-12-06 18:42:16.591498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.591528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.591914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.591944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.592300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.592331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.592699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.592729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.593157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.593187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.593531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.593559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.593935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.593966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.594318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.594357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.594705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.594736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.595120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.595148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 
00:30:22.064 [2024-12-06 18:42:16.595525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.595555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.595892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.595929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.596164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.596193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.596460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.596490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.596736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.596770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.597126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.597155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.597539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.597568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.597929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.597960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.598338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.598366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 00:30:22.064 [2024-12-06 18:42:16.598728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.064 [2024-12-06 18:42:16.598758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.064 qpair failed and we were unable to recover it. 
00:30:22.064 [2024-12-06 18:42:16.599151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.064 [2024-12-06 18:42:16.599180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.064 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure repeats for every retry against 10.0.0.2:4420, from 18:42:16.599 through 18:42:16.678 ...]
00:30:22.068 [2024-12-06 18:42:16.678614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.678651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.678885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.678918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.679267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.679295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.679673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.679706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.680041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.680070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.680437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.680466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.680814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.680844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.681246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.681275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.681537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.681566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.681925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.681955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 
00:30:22.068 [2024-12-06 18:42:16.682311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.682340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.682597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.682625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.683026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.683055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.683435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.683463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.683817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.683846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.068 [2024-12-06 18:42:16.684214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.068 [2024-12-06 18:42:16.684243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.068 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.684603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.684633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.684991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.685021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.685379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.685408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.685787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.685818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 
00:30:22.069 [2024-12-06 18:42:16.686155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.686185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.686535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.686565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.686933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.686963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.687323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.687352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.687708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.687737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.688075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.688104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.688464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.688495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.688867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.688897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.689257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.689285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.689652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.689682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 
00:30:22.069 [2024-12-06 18:42:16.690043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.690072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.690431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.690461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.690813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.690844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.691076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.691115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.691469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.691498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.691847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.691878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.692247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.692276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.692696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.692726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.693136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.693166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.693538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.693565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 
00:30:22.069 [2024-12-06 18:42:16.693936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.693966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.694329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.694358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.694716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.694746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.695178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.695207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.695537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.695567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.695900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.695929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.696287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.696316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.696675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.696705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.697061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.697090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.697346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.697375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 
00:30:22.069 [2024-12-06 18:42:16.697783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.697812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.698178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.698207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.698564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.698593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.698964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.698994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.699339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.699369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.699742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.699772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.700146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.700175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.700542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.700570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.700938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.700968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.701407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.701436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 
00:30:22.069 [2024-12-06 18:42:16.701804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.701835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.702206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.702235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.702600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.702628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.702997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.703026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.703389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.703418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.703782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.703812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.704164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.704193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.704554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.704584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.704939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.704969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 00:30:22.069 [2024-12-06 18:42:16.705329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.069 [2024-12-06 18:42:16.705359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.069 qpair failed and we were unable to recover it. 
00:30:22.069 [2024-12-06 18:42:16.705710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.705740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.706103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.706133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.706498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.706527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.706871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.706913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.707273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.707302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.707666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.707697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.708049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.708077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.708439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.708468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.708723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.708756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.709111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.709140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 
00:30:22.070 [2024-12-06 18:42:16.709376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.709407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.709774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.709803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.710171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.710200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.710572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.710600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.710978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.711008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.711381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.711409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.711680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.711711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.712091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.712121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.712481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.712511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.712966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.712997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 
00:30:22.070 [2024-12-06 18:42:16.713286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.713315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.713658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.713688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.714052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.714081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.714319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.714350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.714725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.714755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.715120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.715149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.715501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.715529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.715869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.715900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.716266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.716296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.716662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.716692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 
00:30:22.070 [2024-12-06 18:42:16.717051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.717080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.717422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.717451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.717821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.717852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.718210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.718239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.718594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.718622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.719003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.719033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.719471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.719500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.719871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.719902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.720265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.720294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.720663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.720695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 
00:30:22.070 [2024-12-06 18:42:16.721037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.721066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.721458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.721488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.721910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.721940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.722313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.722348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.722705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.722735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.723109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.723139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.723487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.723517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.723781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.723811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.724199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.724229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.724566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.724596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 
00:30:22.070 [2024-12-06 18:42:16.724949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.724979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.725341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.725370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.725819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.725849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.726214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.070 [2024-12-06 18:42:16.726242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.070 qpair failed and we were unable to recover it. 00:30:22.070 [2024-12-06 18:42:16.726601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.071 [2024-12-06 18:42:16.726630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.071 qpair failed and we were unable to recover it. 00:30:22.071 [2024-12-06 18:42:16.727004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.071 [2024-12-06 18:42:16.727033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.071 qpair failed and we were unable to recover it. 00:30:22.071 [2024-12-06 18:42:16.727393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.071 [2024-12-06 18:42:16.727423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.071 qpair failed and we were unable to recover it. 00:30:22.071 [2024-12-06 18:42:16.727781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.071 [2024-12-06 18:42:16.727813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.071 qpair failed and we were unable to recover it. 00:30:22.071 [2024-12-06 18:42:16.728174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.071 [2024-12-06 18:42:16.728203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.071 qpair failed and we were unable to recover it. 00:30:22.071 [2024-12-06 18:42:16.728654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.071 [2024-12-06 18:42:16.728685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.071 qpair failed and we were unable to recover it. 
00:30:22.071 [2024-12-06 18:42:16.729112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.071 [2024-12-06 18:42:16.729141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.071 qpair failed and we were unable to recover it.
00:30:22.074 [... last message repeated: the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple recurs for every subsequent retry against tqpair=0x7f63b0000b90 (addr=10.0.0.2, port=4420), from 18:42:16.729502 through 18:42:16.810777 ...]
00:30:22.075 [2024-12-06 18:42:16.811141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.811177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.811528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.811557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.811893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.811923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.812283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.812312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.812694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.812741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.813082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.813117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.813401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.813431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.813668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.813699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.814050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.814081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.814354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.814383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 
00:30:22.075 [2024-12-06 18:42:16.814676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.814707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.815069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.815099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.815449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.815479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.815618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.815656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.815999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.816030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.816366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.816395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.816752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.816783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.817134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.817163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.817527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.817556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.817916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.817947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 
00:30:22.075 [2024-12-06 18:42:16.818312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.818341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.818699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.818729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.818885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.818918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.819312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.819342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.819594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.819623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.820039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.820070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.820305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.820335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.820696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.820727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.821159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.821189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.821541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.821569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 
00:30:22.075 [2024-12-06 18:42:16.821843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.821873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.822260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.822291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.822658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.822688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.822946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.822975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.823348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.823377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.823716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.823745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.824142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.824171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.824550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.824578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.824968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.825006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.825350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.825379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 
00:30:22.075 [2024-12-06 18:42:16.825628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.825671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.826017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.826045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.826436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.826465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.826777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.826807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.827159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.827189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.827566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.075 [2024-12-06 18:42:16.827595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.075 qpair failed and we were unable to recover it. 00:30:22.075 [2024-12-06 18:42:16.827935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.827964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.828218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.828246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.828599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.828628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.828993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.829023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 
00:30:22.076 [2024-12-06 18:42:16.829382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.829410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.829756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.829787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.830165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.830196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.830555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.830585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.830962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.830992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.831368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.831395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.831615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.831652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.832001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.832030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.832389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.832419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 00:30:22.076 [2024-12-06 18:42:16.832764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.076 [2024-12-06 18:42:16.832793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.076 qpair failed and we were unable to recover it. 
00:30:22.349 [2024-12-06 18:42:16.833168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.833201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.833559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.833591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.833938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.833967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.834328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.834357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.834717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.834747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.835010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.835038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.835390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.835418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.835775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.835807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.836195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.836224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.836472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.836502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 
00:30:22.349 [2024-12-06 18:42:16.836840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.836870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.837244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.837274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.837515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.837544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.837782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.837812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.838162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.838193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.838547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.838576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.838799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.838830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.839177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.839207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.839572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.839602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.839975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.840006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 
00:30:22.349 [2024-12-06 18:42:16.840374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.840409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.840633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.840670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.841059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.841088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.841450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.841478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.841817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.349 [2024-12-06 18:42:16.841846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.349 qpair failed and we were unable to recover it. 00:30:22.349 [2024-12-06 18:42:16.842220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.842249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.842613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.842669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.843069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.843097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.843324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.843355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.843722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.843752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 
00:30:22.350 [2024-12-06 18:42:16.844106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.844134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.844488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.844517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.844896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.844926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.845289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.845318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.845544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.845572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.845911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.845941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.846190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.846219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.846601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.846630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.846962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.846991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.847355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.847384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 
00:30:22.350 [2024-12-06 18:42:16.847763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.847793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.848141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.848170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.848401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.848429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.848825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.848856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.849230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.849259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.849621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.849673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.850051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.850079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.850334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.850363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.850703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.850733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.851099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.851127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 
00:30:22.350 [2024-12-06 18:42:16.851501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.851529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.851905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.851935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.852243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.852273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.852627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.852664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.853025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.853053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.853432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.853461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.853834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.853864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.854204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.854233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.854596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.854624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.854880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.854912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 
00:30:22.350 [2024-12-06 18:42:16.855168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.855205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.855428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.855460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.855846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.350 [2024-12-06 18:42:16.855877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.350 qpair failed and we were unable to recover it. 00:30:22.350 [2024-12-06 18:42:16.856238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.351 [2024-12-06 18:42:16.856269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.351 qpair failed and we were unable to recover it. 00:30:22.351 [2024-12-06 18:42:16.856698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.351 [2024-12-06 18:42:16.856729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.351 qpair failed and we were unable to recover it. 00:30:22.351 [2024-12-06 18:42:16.857103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.351 [2024-12-06 18:42:16.857131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.351 qpair failed and we were unable to recover it. 00:30:22.351 [2024-12-06 18:42:16.857517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.351 [2024-12-06 18:42:16.857546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.351 qpair failed and we were unable to recover it. 00:30:22.351 [2024-12-06 18:42:16.857889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.351 [2024-12-06 18:42:16.857919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.351 qpair failed and we were unable to recover it. 00:30:22.351 [2024-12-06 18:42:16.858168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.351 [2024-12-06 18:42:16.858201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.351 qpair failed and we were unable to recover it. 00:30:22.351 [2024-12-06 18:42:16.858574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.351 [2024-12-06 18:42:16.858604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.351 qpair failed and we were unable to recover it. 
00:30:22.351 [2024-12-06 18:42:16.859017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.351 [2024-12-06 18:42:16.859048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.351 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 18:42:16.859 through 18:42:16.939 ...]
00:30:22.356 [2024-12-06 18:42:16.939125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.356 [2024-12-06 18:42:16.939153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.356 qpair failed and we were unable to recover it.
00:30:22.356 [2024-12-06 18:42:16.939477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.356 [2024-12-06 18:42:16.939506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.356 qpair failed and we were unable to recover it. 00:30:22.356 [2024-12-06 18:42:16.939862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.356 [2024-12-06 18:42:16.939894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.356 qpair failed and we were unable to recover it. 00:30:22.356 [2024-12-06 18:42:16.940277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.356 [2024-12-06 18:42:16.940306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.356 qpair failed and we were unable to recover it. 00:30:22.356 [2024-12-06 18:42:16.940675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.356 [2024-12-06 18:42:16.940706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.356 qpair failed and we were unable to recover it. 00:30:22.356 [2024-12-06 18:42:16.941093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.356 [2024-12-06 18:42:16.941121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.356 qpair failed and we were unable to recover it. 00:30:22.356 [2024-12-06 18:42:16.941491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.941520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.941866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.941897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.942252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.942287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.942628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.942683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.943041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.943070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 
00:30:22.357 [2024-12-06 18:42:16.943432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.943461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.943824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.943853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.944221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.944251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.944613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.944664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.945004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.945034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.945287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.945316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.945662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.945693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.946047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.946076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.946422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.946452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.946814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.946843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 
00:30:22.357 [2024-12-06 18:42:16.947202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.947231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.947586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.947614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.947998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.948028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.948390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.948420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.948849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.948879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.949321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.949351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.949706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.949736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.950108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.950137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.950478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.950507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.950898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.950927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 
00:30:22.357 [2024-12-06 18:42:16.951280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.951310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.951673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.951704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.952078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.952106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.952473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.952502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.952871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.952900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.953240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.953269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.953578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.953608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.953982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.954012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.954386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.954415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.954780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.954809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 
00:30:22.357 [2024-12-06 18:42:16.955166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.955194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.955542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.955571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.955939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.955969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.357 [2024-12-06 18:42:16.956322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.357 [2024-12-06 18:42:16.956352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.357 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.956736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.956766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.957105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.957135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.957496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.957524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.957914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.957950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.958308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.958337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.958687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.958717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 
00:30:22.358 [2024-12-06 18:42:16.959092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.959121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.959479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.959508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.959781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.959810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.960174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.960204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.960576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.960605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.960981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.961011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.961355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.961386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.961747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.961776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.962118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.962148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.962390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.962419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 
00:30:22.358 [2024-12-06 18:42:16.962785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.962815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.963175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.963204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.963559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.963587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.963950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.963979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.964365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.964395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.964769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.964799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.965163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.965192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.965557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.965585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.965936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.965965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.966335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.966364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 
00:30:22.358 [2024-12-06 18:42:16.966721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.966751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.967099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.967128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.967491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.967521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.967886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.967916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.968277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.968306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.968658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.968686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.969046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.358 [2024-12-06 18:42:16.969075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.358 qpair failed and we were unable to recover it. 00:30:22.358 [2024-12-06 18:42:16.969438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.969467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.969822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.969851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.970208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.970237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 
00:30:22.359 [2024-12-06 18:42:16.970600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.970628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.971006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.971035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.971384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.971414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.971749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.971784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.972121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.972150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.972520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.972549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.972907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.972936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.973294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.973329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.973684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.973714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.974074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.974102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 
00:30:22.359 [2024-12-06 18:42:16.974471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.974499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.974868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.974897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.975233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.975262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.975668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.975699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.976031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.976059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.976421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.976449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.976810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.976840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.977277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.977306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.977663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.977693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.978073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.978102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 
00:30:22.359 [2024-12-06 18:42:16.978422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.978451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.978813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.978843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.979203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.979232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.979596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.979624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.979980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.980009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.980383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.980412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.980769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.980799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.981150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.981179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.981535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.981564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.981909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.981939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 
00:30:22.359 [2024-12-06 18:42:16.982303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.982333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.982703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.982732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.983089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.983117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.983460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.983489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.983848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.983879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.359 qpair failed and we were unable to recover it. 00:30:22.359 [2024-12-06 18:42:16.984236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.359 [2024-12-06 18:42:16.984265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.984630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.984670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.985032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.985061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.985428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.985457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.985815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.985846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 
00:30:22.360 [2024-12-06 18:42:16.986206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.986235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.986484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.986513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.986909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.986938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.987382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.987412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.987787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.987817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.988195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.988223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.988472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.988501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.988820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.988856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.989211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.989239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 00:30:22.360 [2024-12-06 18:42:16.989604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.360 [2024-12-06 18:42:16.989633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.360 qpair failed and we were unable to recover it. 
00:30:22.360 [2024-12-06 18:42:16.990038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.360 [2024-12-06 18:42:16.990068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.360 qpair failed and we were unable to recover it.
00:30:22.360 [2024-12-06 18:42:16.990392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.360 [2024-12-06 18:42:16.990422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.360 qpair failed and we were unable to recover it.
00:30:22.360 [2024-12-06 18:42:16.990677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.360 [2024-12-06 18:42:16.990708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.360 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for roughly 200 further connect attempts against tqpair=0x7f63b0000b90 (addr=10.0.0.2, port=4420, errno = 111 on every attempt), host timestamps 18:42:16.990 through 18:42:17.070, elapsed-time prefixes 00:30:22.360 through 00:30:22.366 ...]
00:30:22.366 [2024-12-06 18:42:17.070440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.070470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.070837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.070868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.071223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.071253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.071475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.071508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.071841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.071872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.072123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.072155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.072400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.072432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.072766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.072797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.073161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.073191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.073552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.073582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 
00:30:22.366 [2024-12-06 18:42:17.073957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.073989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.074377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.074409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.074836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.074867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.075227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.075255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.075621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.075658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.076030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.076059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.076421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.076449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.076881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.076912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.077270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.077309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.077669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.077699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 
00:30:22.366 [2024-12-06 18:42:17.077955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.077985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.078351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.078381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.078758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.078789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.079195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.079225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.079582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.079611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.079914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.079947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.080298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.080328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.080662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.080700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.081044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.081074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.081435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.081466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 
00:30:22.366 [2024-12-06 18:42:17.081726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.081757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.082116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.082146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.082502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.082532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.082934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.082965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.083302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.083339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.083680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.083711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.366 [2024-12-06 18:42:17.083975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.366 [2024-12-06 18:42:17.084003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.366 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.084350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.084380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.084749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.084781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.085155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.085185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 
00:30:22.367 [2024-12-06 18:42:17.085561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.085591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.085960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.085991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.086354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.086385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.086758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.086788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.087229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.087257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.087504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.087536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.087894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.087927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.088279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.088310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.088668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.088699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.089047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.089075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 
00:30:22.367 [2024-12-06 18:42:17.089416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.089446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.089825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.089859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.090070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.090101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.090488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.090517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.090875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.090906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.091269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.091301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.091662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.091691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.092043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.092073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.092464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.092494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.092848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.092878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 
00:30:22.367 [2024-12-06 18:42:17.093249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.093278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.093614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.093654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.094000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.094031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.094366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.094395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.094757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.094789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.095169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.095199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.095572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.095602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.095973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.096010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.096257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.096287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 00:30:22.367 [2024-12-06 18:42:17.096519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.367 [2024-12-06 18:42:17.096552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.367 qpair failed and we were unable to recover it. 
00:30:22.367 [2024-12-06 18:42:17.096901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.096933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.097284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.097314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.097661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.097691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.098046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.098076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.098360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.098390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.098741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.098771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.099139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.099168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.099538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.099566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.099900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.099930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.100265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.100294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 
00:30:22.368 [2024-12-06 18:42:17.100633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.100674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.101073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.101102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.101462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.101490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.101867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.101898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.102250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.102279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.102655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.102685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.103034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.103064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.103499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.103527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.103879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.103909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.104270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.104299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 
00:30:22.368 [2024-12-06 18:42:17.104675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.104705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.104963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.104994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.105420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.105450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.105799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.105830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.106201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.106231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.106674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.106705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.107063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.107092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.107348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.107380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.107731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.107761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.108107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.108135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 
00:30:22.368 [2024-12-06 18:42:17.108476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.108506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.108872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.108904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.109263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.109292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.109658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.109688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.110103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.110133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.110393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.110422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.110745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.110776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.111123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.368 [2024-12-06 18:42:17.111159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.368 qpair failed and we were unable to recover it. 00:30:22.368 [2024-12-06 18:42:17.111506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.111535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.111793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.111824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 
00:30:22.369 [2024-12-06 18:42:17.112205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.112233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.112569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.112598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.112959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.112989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.113356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.113385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.113835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.113866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.114229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.114259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.114518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.114551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.114892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.114922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.115284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.115313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.115669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.115699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 
00:30:22.369 [2024-12-06 18:42:17.116054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.116083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.116449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.116478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.116816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.116845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.117213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.117242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.117618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.117655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.118028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.118057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.118429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.118458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.118814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.118844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.119112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.119141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.119489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.119519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 
00:30:22.369 [2024-12-06 18:42:17.119900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.119932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.120292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.120321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.120674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.120704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.369 [2024-12-06 18:42:17.120998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.369 [2024-12-06 18:42:17.121026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.369 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.121277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.121316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.121703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.121733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.122074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.122103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.122465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.122495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.122758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.122787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.123197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.123228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 
00:30:22.642 [2024-12-06 18:42:17.123480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.123511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.123890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.123921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.124283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.124312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.124678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.124707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.125059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.125088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.125529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.125559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.125918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.125949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.126237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.126267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.642 qpair failed and we were unable to recover it. 00:30:22.642 [2024-12-06 18:42:17.126636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.642 [2024-12-06 18:42:17.126687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.643 qpair failed and we were unable to recover it. 00:30:22.643 [2024-12-06 18:42:17.127035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.643 [2024-12-06 18:42:17.127065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.643 qpair failed and we were unable to recover it. 
00:30:22.643 [2024-12-06 18:42:17.127439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.127468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.127789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.127828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.128187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.128216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.128460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.128489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.128874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.128904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.129243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.129273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.129657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.129687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.130006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.130035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.130388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.130418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.130682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.130713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.131052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.131080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.131450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.131479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.131823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.131855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.132208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.132237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.132608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.132644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.132982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.133014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.133372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.133401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.133757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.133786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.134123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.134152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.134515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.134544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.134907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.134936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.135168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.135197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.135519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.135548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.135890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.135920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.136284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.136319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.136662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.136691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.137051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.137079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.137445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.137474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.137713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.137742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.138093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.138122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.138478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.138507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.138861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.138892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.139227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.139256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.139590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.139619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.139988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.643 [2024-12-06 18:42:17.140018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.643 qpair failed and we were unable to recover it.
00:30:22.643 [2024-12-06 18:42:17.140291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.140320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.140670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.140701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.141104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.141133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.141502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.141531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.141888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.141917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.142270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.142299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.142666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.142695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.142930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.142959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.143290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.143322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.143675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.143707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.144053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.144084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.144456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.144485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.144723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.144753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.145153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.145183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.145441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.145472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.145735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.145767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.146144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.146174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.146522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.146552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.146896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.146925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.147269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.147298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.147666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.147696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.148052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.148080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.148423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.148452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.148794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.148825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.149052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.149081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.149420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.149450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.149796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.149825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.150197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.150226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.150589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.150618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.150976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.151012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.151369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.151398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.151756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.151786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.152137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.152166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.152533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.152562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.152914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.152953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.153354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.153384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.153742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.153773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.644 [2024-12-06 18:42:17.154104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.644 [2024-12-06 18:42:17.154133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.644 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.154493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.154524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.154897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.154927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.155178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.155207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.155463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.155497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.155867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.155907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.156290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.156340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.156714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.156769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.157030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.157073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.157475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.157523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.157892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.157927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.158203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.158232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.158567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.158596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.159038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.159069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.159329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.159361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.159728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.159759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.160131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.160160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.160283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.160314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.160725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.160756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.161121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.161150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.161292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.161324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.161663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.161692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.162069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.162098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.162438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.162468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.162729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.162760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.163125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.163154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.163511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.163539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.163789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.163819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.164158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.164188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.164554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.164584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.164929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.164959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.165328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.165357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.165728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.165775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.166147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.166176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.166532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.166561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.166933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.166963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.167342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.167370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.167669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.167698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.168074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.168103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.645 [2024-12-06 18:42:17.168433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.645 [2024-12-06 18:42:17.168463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.645 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.168820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.168851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.169246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.169275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.169657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.169686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.169944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.169972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.170346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.170375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.170721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.170752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.171121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.171151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.171530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.171560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.171815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.171845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.172205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.172235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.172593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.172622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.173015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.173044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.173409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.173438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.173839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.173868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.174222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.174251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.174677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.174707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.175076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.175105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.175465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.175493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.175869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.175899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.176266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.176295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.176667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.176697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.177073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.177102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.177431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.177469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.177823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.177854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.178246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.178276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.178655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.178685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.179041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.179071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.179443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.179471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.179861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.179891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.180262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.180291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.180536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.180567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.180988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.181020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.181358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.181395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.181858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.181888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.182249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.182279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.182658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.182687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.183077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.183107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.183468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.646 [2024-12-06 18:42:17.183498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.646 qpair failed and we were unable to recover it.
00:30:22.646 [2024-12-06 18:42:17.183851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.183881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.184315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.184344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.184596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.184624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.184978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.185007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.185370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.185400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.185789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.185819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.186065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.186097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.186449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.186478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.186817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.186848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.187212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.187241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.187584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.187614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.188072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.188102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.188361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.188390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.188628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.188668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.189022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.189050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.189416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.189445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.189872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.189903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.190260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.190289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.190670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.190700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.191087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.647 [2024-12-06 18:42:17.191117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.647 qpair failed and we were unable to recover it.
00:30:22.647 [2024-12-06 18:42:17.191487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.191516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.191874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.191904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.192261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.192289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.192635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.192672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.193086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.193115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.193487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.193516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.193913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.193943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.194273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.194303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.194658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.194687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 00:30:22.647 [2024-12-06 18:42:17.195122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.647 [2024-12-06 18:42:17.195151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.647 qpair failed and we were unable to recover it. 
00:30:22.647 [2024-12-06 18:42:17.195444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.195472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.195804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.195833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.196203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.196232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.196585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.196623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.197046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.197082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.197418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.197447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.197753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.197784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.198161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.198190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.198549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.198579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.198959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.198988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 
00:30:22.648 [2024-12-06 18:42:17.199355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.199384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.199654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.199683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.199922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.199951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.200344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.200372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.200714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.200744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.201102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.201131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.201499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.201528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.201894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.201925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.202297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.202328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.202559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.202588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 
00:30:22.648 [2024-12-06 18:42:17.202868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.202898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.203268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.203298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.203659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.203689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.204021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.204050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.204359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.204389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.204724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.204753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.205127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.205156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.205534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.205563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.205933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.205964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.206311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.206339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 
00:30:22.648 [2024-12-06 18:42:17.206608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.206636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.207027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.207057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.207414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.207443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.207840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.207869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.208217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.208246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.208605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.208634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.209026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.209055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.209414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.209442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.209816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.209846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.648 qpair failed and we were unable to recover it. 00:30:22.648 [2024-12-06 18:42:17.210257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.648 [2024-12-06 18:42:17.210287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 
00:30:22.649 [2024-12-06 18:42:17.210664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.210694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.210995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.211023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.211407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.211436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.211686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.211716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.212078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.212115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.212449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.212478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.212738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.212768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.213141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.213170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.213523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.213554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.213919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.213950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 
00:30:22.649 [2024-12-06 18:42:17.214327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.214356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.214708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.214736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.215090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.215120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.215468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.215497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.215833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.215862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.216218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.216248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.216605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.216636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.217008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.217036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.217415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.217444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.217816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.217847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 
00:30:22.649 [2024-12-06 18:42:17.218204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.218232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.218593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.218621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.218953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.218983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.219364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.219392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.219758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.219789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.220165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.220195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.220565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.220594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.221014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.221043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.221402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.221432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.221777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.221807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 
00:30:22.649 [2024-12-06 18:42:17.222084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.222113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.222480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.222510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.222890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.222921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.223184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.223212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.223564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.223593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.224020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.224050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.649 [2024-12-06 18:42:17.224440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.649 [2024-12-06 18:42:17.224469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.649 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.224781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.224811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.225193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.225223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.225586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.225614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 
00:30:22.650 [2024-12-06 18:42:17.225992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.226022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.226373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.226402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.226697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.226728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.226967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.227000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.227310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.227346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.227708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.227739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.228137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.228166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.228534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.228564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.228911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.228941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.229243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.229274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 
00:30:22.650 [2024-12-06 18:42:17.229652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.229682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.229934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.229963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.230317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.230347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.230735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.230766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.231125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.231155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.231526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.231555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.231959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.231988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.232348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.232376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.232652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.232682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.233092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.233121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 
00:30:22.650 [2024-12-06 18:42:17.233479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.233508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.233854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.233883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.234243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.234272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.234657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.234687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.235030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.235059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.235421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.235450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.235717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.235747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.236110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.236140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.236479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.236509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.236914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.236943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 
00:30:22.650 [2024-12-06 18:42:17.237315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.237343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.237692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.237724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.238103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.238132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.238501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.238529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.238877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.238907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.239073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.650 [2024-12-06 18:42:17.239101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.650 qpair failed and we were unable to recover it. 00:30:22.650 [2024-12-06 18:42:17.239477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.239506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.239851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.239883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.240143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.240173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.240550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.240579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 
00:30:22.651 [2024-12-06 18:42:17.241038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.241069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.241426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.241456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.241761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.241791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.242164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.242194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.242448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.242489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.242829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.242861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.243211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.243242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.243619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.243659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.244042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.244073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.244358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.244387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 
00:30:22.651 [2024-12-06 18:42:17.244660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.244690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.245072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.245102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.245480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.245510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.245773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.245803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.246098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.246127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.246457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.246485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.246845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.246876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.247131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.247160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.247514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.247543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.247880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.247911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 
00:30:22.651 [2024-12-06 18:42:17.248273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.248302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.248675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.248704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.249087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.249116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.249452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.249482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.249880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.249912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.250282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.250311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.250678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.250708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.251067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.651 [2024-12-06 18:42:17.251096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.651 qpair failed and we were unable to recover it. 00:30:22.651 [2024-12-06 18:42:17.251504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.251534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.251884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.251914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 
00:30:22.652 [2024-12-06 18:42:17.252276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.252306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.252728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.252760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.252997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.253026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.253413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.253443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.253811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.253843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.254291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.254322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.254682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.254712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.255138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.255168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.255512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.255540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.255896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.255927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 
00:30:22.652 [2024-12-06 18:42:17.256265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.256295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.256537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.256566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.256928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.256959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.257326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.257354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.257716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.257753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.258003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.258032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.258275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.258304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.258715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.258744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.259113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.259142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.259484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.259514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 
00:30:22.652 [2024-12-06 18:42:17.259767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.259799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.260130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.260158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.260518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.260548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.260958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.260987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.261332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.261361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.261682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.261712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.262177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.262205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.262569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.262597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.262968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.262998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.263353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.263381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 
00:30:22.652 [2024-12-06 18:42:17.263692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.263722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.264084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.264112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.264595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.264624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.265023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.265053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.652 qpair failed and we were unable to recover it. 00:30:22.652 [2024-12-06 18:42:17.265455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.652 [2024-12-06 18:42:17.265484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.265833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.265863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.266244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.266273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.266628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.266664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.267032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.267060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.267300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.267329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 
00:30:22.653 [2024-12-06 18:42:17.267687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.267717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.268081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.268111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.268490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.268519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.268775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.268805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.269069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.269097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.269446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.269475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.269723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.269757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.270102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.270131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.270487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.270516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.270910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.270940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 
00:30:22.653 [2024-12-06 18:42:17.271307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.271335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.271683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.271713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.272065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.272095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.272433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.272462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.272806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.272843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.273171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.273201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.273569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.273598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.274050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.274080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.274449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.274478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.274901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.274932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 
00:30:22.653 [2024-12-06 18:42:17.275275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.275305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.275677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.275708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.276053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.276083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.276334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.276363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.276585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.276618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.276986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.277017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.277389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.277417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.277684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.277713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.278099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.278128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.278500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.278532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 
00:30:22.653 [2024-12-06 18:42:17.278898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.278930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.279158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.279187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.279579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.279607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.279853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.653 [2024-12-06 18:42:17.279883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.653 qpair failed and we were unable to recover it. 00:30:22.653 [2024-12-06 18:42:17.280334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.280363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.280696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.280734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.281110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.281140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.281486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.281515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.281869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.281900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.282261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.282290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 
00:30:22.654 [2024-12-06 18:42:17.282660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.282690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.283077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.283106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.283469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.283500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.283870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.283901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.284126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.284155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.284483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.284514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.284753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.284785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.285033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.285062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.285473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.285504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.285873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.285903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 
00:30:22.654 [2024-12-06 18:42:17.286268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.286296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.286669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.286699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.287062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.287091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.287458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.287487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.287857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.287893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.290260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.290344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.290570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.290609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.290873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.290906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.291258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.291287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.291732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.291763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 
00:30:22.654 [2024-12-06 18:42:17.292132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.292161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.292523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.292552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.292783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.292815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.293160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.293190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.293558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.293587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.293923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.293953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.294313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.294342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.294546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.294578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.294978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.295011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.295371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.295401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 
00:30:22.654 [2024-12-06 18:42:17.295857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.295888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.296238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.296267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.296630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.296685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.297041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.297070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.654 [2024-12-06 18:42:17.297427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.654 [2024-12-06 18:42:17.297457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.654 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.297817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.297847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.298217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.298246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.298604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.298632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.299031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.299061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.299420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.299449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 
00:30:22.655 [2024-12-06 18:42:17.299799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.299830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.300100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.300134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.300487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.300517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.300872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.300902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.301261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.301290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.301703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.301736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.301992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.302022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.302384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.302414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.302755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.302786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.303166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.303195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 
00:30:22.655 [2024-12-06 18:42:17.303547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.303575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.303943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.303973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.304323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.304356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.304720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.304750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.305008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.305047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.305416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.305445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.305791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.305821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.306181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.306210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.306573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.306603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.307026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.307058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 
00:30:22.655 [2024-12-06 18:42:17.307314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.307346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.307705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.307736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.308112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.308141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.308497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.308525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.308899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.308929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.309305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.309335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.309707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.309738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.310127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.310156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.310602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.310633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.311042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.311073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 
00:30:22.655 [2024-12-06 18:42:17.311430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.311463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.655 [2024-12-06 18:42:17.311821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.655 [2024-12-06 18:42:17.311853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.655 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.312187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.312217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.312477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.312506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.312842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.312871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.313124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.313154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.313490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.313520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.313882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.313912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.314276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.314306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.314662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.314691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 
00:30:22.656 [2024-12-06 18:42:17.314954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.314983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.315378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.315409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.315664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.315696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.316098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.316128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.316502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.316533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.316894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.316925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.317182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.317214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.317583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.317613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.317997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.318030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 00:30:22.656 [2024-12-06 18:42:17.318394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.656 [2024-12-06 18:42:17.318423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.656 qpair failed and we were unable to recover it. 
00:30:22.661 [2024-12-06 18:42:17.395272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.661 [2024-12-06 18:42:17.395304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.661 qpair failed and we were unable to recover it. 00:30:22.661 [2024-12-06 18:42:17.395668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.661 [2024-12-06 18:42:17.395706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.661 qpair failed and we were unable to recover it. 00:30:22.661 [2024-12-06 18:42:17.396093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.661 [2024-12-06 18:42:17.396121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.661 qpair failed and we were unable to recover it. 00:30:22.661 [2024-12-06 18:42:17.396484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.661 [2024-12-06 18:42:17.396512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.661 qpair failed and we were unable to recover it. 00:30:22.661 [2024-12-06 18:42:17.396873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.661 [2024-12-06 18:42:17.396904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.661 qpair failed and we were unable to recover it. 00:30:22.661 [2024-12-06 18:42:17.397259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.661 [2024-12-06 18:42:17.397290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.661 qpair failed and we were unable to recover it. 00:30:22.661 [2024-12-06 18:42:17.397623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.661 [2024-12-06 18:42:17.397663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.661 qpair failed and we were unable to recover it. 00:30:22.661 [2024-12-06 18:42:17.398093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.661 [2024-12-06 18:42:17.398123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.661 qpair failed and we were unable to recover it. 00:30:22.661 [2024-12-06 18:42:17.398476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.398506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.398866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.398896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 
00:30:22.662 [2024-12-06 18:42:17.399231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.399259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.399590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.399619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.399999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.400031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.400394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.400422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.400790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.400820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.401228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.401257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.401616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.401661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.402016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.402046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.402422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.402452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.402702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.402733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 
00:30:22.662 [2024-12-06 18:42:17.403104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.403133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.403496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.403525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.403895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.403925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.404171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.404203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.404590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.404622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.404987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.405017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.405257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.405285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.405677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.405708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.406070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.406099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.406456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.406485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 
00:30:22.662 [2024-12-06 18:42:17.406867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.406899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.407242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.407272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.407635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.407673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.408028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.408057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.408363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.408391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.408755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.408785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.409130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.409169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.409536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.409565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.409938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.409967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.410378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.410407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 
00:30:22.662 [2024-12-06 18:42:17.410774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.410804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.411183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.411219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.411556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.411587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.411849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.411880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.412286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.412314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.412676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.412705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.662 qpair failed and we were unable to recover it. 00:30:22.662 [2024-12-06 18:42:17.413100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.662 [2024-12-06 18:42:17.413129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.663 qpair failed and we were unable to recover it. 00:30:22.663 [2024-12-06 18:42:17.413488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.663 [2024-12-06 18:42:17.413518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.663 qpair failed and we were unable to recover it. 00:30:22.663 [2024-12-06 18:42:17.413887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.663 [2024-12-06 18:42:17.413918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.663 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.414264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.414295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 
00:30:22.936 [2024-12-06 18:42:17.414665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.414699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.415066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.415094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.415347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.415379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.415753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.415784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.418098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.418167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.418581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.418619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.419033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.419063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.419407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.419437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.936 [2024-12-06 18:42:17.419811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.936 [2024-12-06 18:42:17.419842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.936 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.420206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.420235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 
00:30:22.937 [2024-12-06 18:42:17.420481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.420510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.420885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.420916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.421291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.421321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.421683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.421713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.422076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.422105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.422367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.422396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.422779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.422811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.423163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.423192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.423462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.423495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.423866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.423897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 
00:30:22.937 [2024-12-06 18:42:17.424256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.424285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.424658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.424690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.425043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.425073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.425413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.425443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.425840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.425870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.426237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.426265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.426627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.426665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.426990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.427019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.427385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.427415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.427783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.427814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 
00:30:22.937 [2024-12-06 18:42:17.428169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.428198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.428567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.428604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.428963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.428992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.429349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.429380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.429748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.429778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.430138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.430167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.430527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.430556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.430929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.430962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.431261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.431292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.431718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.431750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 
00:30:22.937 [2024-12-06 18:42:17.432091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.432129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.432492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.432521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.937 qpair failed and we were unable to recover it. 00:30:22.937 [2024-12-06 18:42:17.432866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.937 [2024-12-06 18:42:17.432897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.433261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.433290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.433656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.433689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.434033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.434063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.434406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.434435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.434795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.434826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.435190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.435220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.435585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.435614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 
00:30:22.938 [2024-12-06 18:42:17.436019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.436051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.436406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.436437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.436866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.436896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.437258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.437286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.437658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.437688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.438046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.438075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.438441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.438471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.438823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.438852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.439217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.439247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.439617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.439663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 
00:30:22.938 [2024-12-06 18:42:17.440007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.440036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.440393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.440423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.440791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.440822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.441163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.441191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.441532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.441561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.441921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.441951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.442316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.442346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.442702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.442733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.443096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.443124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.443490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.443518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 
00:30:22.938 [2024-12-06 18:42:17.443790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.443819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.444175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.444209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.444555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.444584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.444948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.444981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.445329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.445358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.938 [2024-12-06 18:42:17.445720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.938 [2024-12-06 18:42:17.445750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.938 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.446121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.446150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.446503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.446532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.446886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.446916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.447283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.447313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 
00:30:22.939 [2024-12-06 18:42:17.447674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.447706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.448053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.448082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.448457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.448487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.448861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.448892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.449250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.449280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.449635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.449676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.450072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.450102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.450447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.450476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.450895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.450925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 00:30:22.939 [2024-12-06 18:42:17.451253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.939 [2024-12-06 18:42:17.451283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.939 qpair failed and we were unable to recover it. 
00:30:22.939 [2024-12-06 18:42:17.451716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.939 [2024-12-06 18:42:17.451747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.939 qpair failed and we were unable to recover it.
00:30:22.945 [... the same three-line triplet — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats roughly 200 more times, from 18:42:17.452103 through 18:42:17.531846 ...]
00:30:22.945 [2024-12-06 18:42:17.532204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.532237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.532463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.532495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.532875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.532905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.533238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.533276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.533633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.533680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.534058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.534087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.534445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.534473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.534825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.534856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.535219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.535249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.535617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.535656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 
00:30:22.946 [2024-12-06 18:42:17.536013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.536042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.536406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.536434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.536876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.536907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.537274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.537304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.537685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.537717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.538043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.538072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.538423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.538459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.538815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.538845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.539209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.539238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.539606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.539635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 
00:30:22.946 [2024-12-06 18:42:17.539889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.539922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.540297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.540327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.540693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.540724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.541075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.541105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.541467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.541497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.541853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.541884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.542239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.542268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.542631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.542670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.542971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.542999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.543372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.543401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 
00:30:22.946 [2024-12-06 18:42:17.543742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.543771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.544100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.544130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.544494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.544523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.544892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.544922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.545179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.946 [2024-12-06 18:42:17.545210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.946 qpair failed and we were unable to recover it. 00:30:22.946 [2024-12-06 18:42:17.545555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.545585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.545932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.545962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.546319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.546349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.546710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.546746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.547103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.547133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 
00:30:22.947 [2024-12-06 18:42:17.547507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.547535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.547867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.547898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.548240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.548270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.548633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.548676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.549024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.549053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.549415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.549444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.549807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.549837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.550203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.550231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.550582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.550612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.550985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.551015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 
00:30:22.947 [2024-12-06 18:42:17.551354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.551382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.551742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.551772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.552135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.552163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.552520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.552548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.552895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.552927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.553266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.553295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.553661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.553693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.554053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.554082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.554448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.554477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.554857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.554886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 
00:30:22.947 [2024-12-06 18:42:17.555195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.947 [2024-12-06 18:42:17.555226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.947 qpair failed and we were unable to recover it. 00:30:22.947 [2024-12-06 18:42:17.555583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.555612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.555979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.556008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.556385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.556415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.556770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.556800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.557165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.557193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.557564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.557595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.557960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.557991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.558351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.558381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.558742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.558775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 
00:30:22.948 [2024-12-06 18:42:17.559118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.559148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.559520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.559550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.559920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.559951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.560312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.560340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.560712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.560742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.561156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.561185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.561542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.561571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.561942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.561973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.562334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.562370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.562727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.562760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 
00:30:22.948 [2024-12-06 18:42:17.563159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.563190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.563529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.563560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.563919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.563949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.564314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.564345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.564702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.564732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.565089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.565118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.565485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.565514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.565780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.565809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.566091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.566120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.566488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.566519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 
00:30:22.948 [2024-12-06 18:42:17.566895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.566925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.567312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.567342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.567699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.567733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.568126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.948 [2024-12-06 18:42:17.568156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.948 qpair failed and we were unable to recover it. 00:30:22.948 [2024-12-06 18:42:17.568481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.568510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.568848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.568878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.569224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.569254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.569617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.569673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.569924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.569953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.570212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.570241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 
00:30:22.949 [2024-12-06 18:42:17.570611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.570650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.571021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.571050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.571310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.571338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.571703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.571735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.572108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.572137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.572504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.572533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.572817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.572847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.573098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.573126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.573485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.573516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.573905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.573936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 
00:30:22.949 [2024-12-06 18:42:17.574205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.574234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.574615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.574654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.575036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.575064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.575444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.575473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.575825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.575857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.576244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.576273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.576531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.576559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.576933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.576964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.577325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.577362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.577800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.577832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 
00:30:22.949 [2024-12-06 18:42:17.578067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.578096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.578253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.578281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.578654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.578685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.579088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.579117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.579349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.579377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.579617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.579666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.580053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.949 [2024-12-06 18:42:17.580083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.949 qpair failed and we were unable to recover it. 00:30:22.949 [2024-12-06 18:42:17.580423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.950 [2024-12-06 18:42:17.580452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.950 qpair failed and we were unable to recover it. 00:30:22.950 [2024-12-06 18:42:17.580801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.950 [2024-12-06 18:42:17.580833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.950 qpair failed and we were unable to recover it. 00:30:22.950 [2024-12-06 18:42:17.581188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.950 [2024-12-06 18:42:17.581217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.950 qpair failed and we were unable to recover it. 
00:30:22.950 [2024-12-06 18:42:17.581581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.950 [2024-12-06 18:42:17.581612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:22.950 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats verbatim (timestamps aside) for every subsequent reconnect attempt from 18:42:17.581581 through 18:42:17.660698, console time 00:30:22.950-00:30:22.956: each connect() to 10.0.0.2, port=4420 is refused with errno = 111, and tqpair=0x7f63b0000b90 fails and cannot be recovered ...]
00:30:22.956 [2024-12-06 18:42:17.661067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.661096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.661452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.661484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.661823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.661855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.662214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.662244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.662609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.662645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.663020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.663049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.663397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.663426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.663785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.663815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.664169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.664200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.664560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.664589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 
00:30:22.956 [2024-12-06 18:42:17.664938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.664970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.665310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.665339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.956 [2024-12-06 18:42:17.665692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.956 [2024-12-06 18:42:17.665724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.956 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.666084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.666112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.666478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.666509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.666881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.666911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.667248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.667277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.667633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.667670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.668019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.668055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.668400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.668430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 
00:30:22.957 [2024-12-06 18:42:17.668760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.668792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.669140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.669168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.669533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.669562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.669909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.669939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.670314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.670344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.670705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.670737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.671094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.671123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.671502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.671531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.671886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.671917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.672255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.672283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 
00:30:22.957 [2024-12-06 18:42:17.672623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.672678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.673035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.673066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.673424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.673453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.673797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.673828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.674075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.674104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.674463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.674493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.674837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.674869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.675226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.675256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.675619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.675658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.957 [2024-12-06 18:42:17.676018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.676048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 
00:30:22.957 [2024-12-06 18:42:17.676395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.957 [2024-12-06 18:42:17.676424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.957 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.676785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.676816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.677176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.677205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.677560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.677590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.677918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.677949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.678308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.678338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.678701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.678732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.679093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.679123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.679483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.679515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.679889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.679921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 
00:30:22.958 [2024-12-06 18:42:17.680295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.680325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.680696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.680727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.681104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.681134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.681507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.681537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.681904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.681936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.682296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.682327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.682730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.682761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.683125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.683154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.683522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.683559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.683918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.683948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 
00:30:22.958 [2024-12-06 18:42:17.684349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.684381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.684741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.684773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.685056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.685085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.685431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.685461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.685812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.685843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.686229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.686259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.686605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.686636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.687011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.687042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.687417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.687446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.687806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.687836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 
00:30:22.958 [2024-12-06 18:42:17.688192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.688221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.688588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.688618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.688982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.689013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.689357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.689386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.689709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.689738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.690083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.958 [2024-12-06 18:42:17.690112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.958 qpair failed and we were unable to recover it. 00:30:22.958 [2024-12-06 18:42:17.690463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.690492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.690820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.690849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.691222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.691252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.691621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.691664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 
00:30:22.959 [2024-12-06 18:42:17.691995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.692024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.692275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.692304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.692670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.692701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.693057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.693085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.693439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.693469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.693831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.693863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.694225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.694254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.694618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.694664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.695014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.695045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.695393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.695421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 
00:30:22.959 [2024-12-06 18:42:17.695775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.695808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.696170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.696199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.696558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.696586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.697031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.697062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.697424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.697453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.697809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.697841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.698210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.698239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.698590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.698618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.698989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.699024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.699365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.699395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 
00:30:22.959 [2024-12-06 18:42:17.699629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.699669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.700041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.700070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.700433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.700462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.700824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.700855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.701215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.701244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.701609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.701652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.702001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.702030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.702396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.702427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.702794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.702825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 00:30:22.959 [2024-12-06 18:42:17.703184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.959 [2024-12-06 18:42:17.703213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.959 qpair failed and we were unable to recover it. 
00:30:22.960 [2024-12-06 18:42:17.703571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.703601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.703964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.703993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.704357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.704388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.704755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.704788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.705012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.705041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.705388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.705427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.705756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.705787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.706126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.706157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.706517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.706547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.706890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.706922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 
00:30:22.960 [2024-12-06 18:42:17.707330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.707359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.707614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.707654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:22.960 [2024-12-06 18:42:17.708023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.960 [2024-12-06 18:42:17.708052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:22.960 qpair failed and we were unable to recover it. 00:30:23.233 [2024-12-06 18:42:17.708410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.233 [2024-12-06 18:42:17.708441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.233 qpair failed and we were unable to recover it. 00:30:23.233 [2024-12-06 18:42:17.708799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.233 [2024-12-06 18:42:17.708830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.233 qpair failed and we were unable to recover it. 00:30:23.233 [2024-12-06 18:42:17.709191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.233 [2024-12-06 18:42:17.709222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.233 qpair failed and we were unable to recover it. 00:30:23.233 [2024-12-06 18:42:17.709581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.233 [2024-12-06 18:42:17.709611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.233 qpair failed and we were unable to recover it. 00:30:23.233 [2024-12-06 18:42:17.709998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.233 [2024-12-06 18:42:17.710029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.233 qpair failed and we were unable to recover it. 00:30:23.233 [2024-12-06 18:42:17.710294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.233 [2024-12-06 18:42:17.710326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.233 qpair failed and we were unable to recover it. 00:30:23.233 [2024-12-06 18:42:17.710668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.233 [2024-12-06 18:42:17.710700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.233 qpair failed and we were unable to recover it. 
00:30:23.233 [2024-12-06 18:42:17.711071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.233 [2024-12-06 18:42:17.711103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.233 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.711441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.711472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.711832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.711863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.712227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.712257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.712681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.712711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.713061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.713092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.713448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.713479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.713834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.713865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.714230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.714259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-12-06 18:42:17.714693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-12-06 18:42:17.714723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 
00:30:23.239 [2024-12-06 18:42:17.792165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.239 [2024-12-06 18:42:17.792194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.239 qpair failed and we were unable to recover it. 00:30:23.239 [2024-12-06 18:42:17.792451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.239 [2024-12-06 18:42:17.792479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.239 qpair failed and we were unable to recover it. 00:30:23.239 [2024-12-06 18:42:17.792828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-12-06 18:42:17.792860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-12-06 18:42:17.793207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-12-06 18:42:17.793237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-12-06 18:42:17.793601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-12-06 18:42:17.793630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-12-06 18:42:17.793984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-12-06 18:42:17.794014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-12-06 18:42:17.794382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-12-06 18:42:17.794412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-12-06 18:42:17.794666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-12-06 18:42:17.794700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-12-06 18:42:17.795053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-12-06 18:42:17.795082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-12-06 18:42:17.795437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-12-06 18:42:17.795465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 
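For readers decoding the failure: errno = 111 on Linux is ECONNREFUSED, meaning connect() reached 10.0.0.2 but nothing was listening on port 4420 (the standard NVMe/TCP port), so the kernel answered with a RST. That matches the state of the test at this point: the target process has just been killed, as the "Killed" notice below shows. A minimal bash sketch, not part of the test suite, that reproduces the same condition; the address and port simply mirror the log:

  #!/usr/bin/env bash
  # Illustrative only: connecting to a port with no listener yields the
  # same ECONNREFUSED (errno 111) the host log shows while nvmf_tgt is down.
  addr=10.0.0.2 port=4420
  if timeout 2 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
      echo "listener is back on $addr:$port"
  else
      # failure here is either refused (errno 111) or a timeout if unreachable
      echo "connect() failed: no listener on $addr:$port"
  fi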
[log condensed: six more identical connect() failed (errno = 111) retries against 10.0.0.2:4420, 18:42:17.795709 through 18:42:17.797606, omitted]
00:30:23.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2325164 Killed "${NVMF_APP[@]}" "$@"
[log condensed: two more identical retries (18:42:17.798033, 18:42:17.798419) omitted]
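The 'line 36: 2325164 Killed "${NVMF_APP[@]}" "$@"' message is bash's asynchronous job notice: target_disconnect.sh deliberately SIGKILLed the running nvmf_tgt (PID 2325164) to simulate an abrupt target disconnect, which is exactly why every reconnect above is refused. A hedged sketch of that kill-and-observe pattern; the real script's line 36 may differ in detail:

  #!/usr/bin/env bash
  # Assumed setup: NVMF_APP points at the nvmf_tgt binary seen in the trace.
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  "${NVMF_APP[@]}" -m 0xF0 &    # launch the target in the background
  nvmfpid=$!
  sleep 5                       # let the host establish its qpairs
  kill -9 "$nvmfpid"            # abrupt disconnect; bash reports "Killed"
  wait "$nvmfpid" || true       # reap; exit status 137 = 128 + SIGKILL(9)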
00:30:23.240 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:23.240 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:23.240 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:23.240 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:23.240 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[log condensed: the xtrace lines above were interleaved in the raw log with about 18 more identical connect() failed (errno = 111) retries against 10.0.0.2:4420, 18:42:17.798887 through 18:42:17.805516, omitted]
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2326140
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2326140
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2326140 ']'
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:23.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:23.241 18:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[log condensed: the xtrace lines above were interleaved in the raw log with about 14 more identical connect() failed (errno = 111) retries against 10.0.0.2:4420, 18:42:17.805772 through 18:42:17.810667, omitted]
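The trace above is the recovery half of the test: disconnect_init 10.0.0.2 relaunches nvmf_tgt (new PID 2326140) inside the cvl_0_0_ns_spdk namespace, and waitforlisten then blocks until the app is up and its RPC socket /var/tmp/spdk.sock answers, giving up after max_retries=100. A minimal sketch of that polling pattern, assuming the shape only; SPDK's real helper in test/common/autotest_common.sh differs in detail:

  #!/usr/bin/env bash
  # Hedged sketch of the waitforlisten pattern traced above, not SPDK's
  # actual implementation: poll until the app's RPC socket exists.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # process died
          [[ -S $rpc_addr ]] && return 0           # RPC socket is up
          sleep 0.5
      done
      return 1                                     # gave up after max_retries
  }
  # usage: waitforlisten_sketch "$nvmfpid" /var/tmp/spdk.sock && echo ready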
00:30:23.241 [2024-12-06 18:42:17.811030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-12-06 18:42:17.811063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it.
[log condensed: the same three-message failure repeats with advancing timestamps (18:42:17.811426 through 18:42:17.852986) while the host keeps retrying tqpair 0x7f63b0000b90 against 10.0.0.2:4420; roughly 100 identical attempts omitted]
00:30:23.244 [2024-12-06 18:42:17.853207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.853236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.853599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.853631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.854052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.854083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.854437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.854467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.854741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.854772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.855136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.855166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.855510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.855540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.855777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.855808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.856132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.856162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.856512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.856541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 
00:30:23.244 [2024-12-06 18:42:17.856913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.856943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.857296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.857325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.857569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.857598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.857872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.857907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.858275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.858305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.858677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.858709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.859086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.859116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.859454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.859482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.859893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.859926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-12-06 18:42:17.860286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-12-06 18:42:17.860314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 
00:30:23.245 [2024-12-06 18:42:17.860677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.860708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.861046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.861074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.861418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.861447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.861788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.861821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.862054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.862084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.862343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.862371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.862744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.862775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.863154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.863183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.863549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.863579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.863927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.863959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 
00:30:23.245 [2024-12-06 18:42:17.864328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.864359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.864733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.864771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.865058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.865086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.865328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.865356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.865740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.865771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.866141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.866170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.866518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.866549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.866921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.866951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.867288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.867317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 00:30:23.245 [2024-12-06 18:42:17.867567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.245 [2024-12-06 18:42:17.867596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.245 qpair failed and we were unable to recover it. 
[... reconnect attempts continue failing with the same errno = 111 triplet while a new SPDK process starts up ...]
00:30:23.245 [2024-12-06 18:42:17.868673] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:30:23.245 [2024-12-06 18:42:17.868737] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
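The record above, interleaved with the reconnect failures, shows a fresh SPDK application (by its EAL process name, nvmf) being brought up on DPDK 24.03.0 while the host side keeps retrying. In the EAL parameters, -c 0xF0 is the DPDK core mask. As an illustrative aside (these commands are not part of the job output), the mask can be decoded to the CPU cores the process is pinned to:

  # Decode a DPDK EAL core mask; 0xF0 = 0b11110000, i.e. cores 4-7.
  mask=0xF0
  for cpu in $(seq 0 63); do
    (( (mask >> cpu) & 1 )) && printf 'core %d\n' "$cpu"
  done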
[... the same three-line qpair failure repeats for each reconnect attempt between 18:42:17.871 and 18:42:17.915 ...]
00:30:23.249 [2024-12-06 18:42:17.915819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.249 [2024-12-06 18:42:17.915851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.249 qpair failed and we were unable to recover it.
00:30:23.249 [2024-12-06 18:42:17.916215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.916244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.916607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.916656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.916882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.916916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.917288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.917318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.917685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.917715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.918163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.918192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.918542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.918571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.918807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.918840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.919202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.919233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.919594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.919622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.249 [2024-12-06 18:42:17.920011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.920042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.920405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.920435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.920769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.920800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.921168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.921197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.921567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.921598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.921974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.922005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.922246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.922277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.922659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.922690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.923057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.923086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.923445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.923477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.249 [2024-12-06 18:42:17.923714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.923748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.924122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.924158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.924513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.924542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.924938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.924970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.925332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.925360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.925703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.925735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.926091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.926121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.926487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.926516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.926867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.926900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.927266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.927297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.249 [2024-12-06 18:42:17.927659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.927689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.928034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.928064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.928425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.928454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.928816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-12-06 18:42:17.928849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-12-06 18:42:17.929207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.929236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.929599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.929628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.929998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.930028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.930369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.930400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.930655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.930688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.931049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.931079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 
00:30:23.250 [2024-12-06 18:42:17.931437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.931466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.931818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.931848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.932216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.932245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.932598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.932628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.933048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.933078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.933447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.933477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.933848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.933878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.934291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.934320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.934598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.934628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.934984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.935014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 
00:30:23.250 [2024-12-06 18:42:17.935379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.935408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.935784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.935817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.936167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.936196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.936563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.936592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.936843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.936876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.937281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.937311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.937558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.937589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.937951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.937983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.938344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.938374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.938734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.938765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 
00:30:23.250 [2024-12-06 18:42:17.939047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.939080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.939422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.939459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.939800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.939839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.940201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.940230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.940600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.940630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.941017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.941046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.941404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.941434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.941799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.941829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.942197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.942226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.942590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.942619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 
00:30:23.250 [2024-12-06 18:42:17.942931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.942961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.943310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.250 [2024-12-06 18:42:17.943340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.250 qpair failed and we were unable to recover it. 00:30:23.250 [2024-12-06 18:42:17.943715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.943746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.944116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.944145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.944432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.944460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.944752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.944783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.945140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.945169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.945531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.945562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.945910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.945940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.946321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.946351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 
00:30:23.251 [2024-12-06 18:42:17.946719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.946749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.947103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.947133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.947407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.947436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.947785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.947815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.948190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.948220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.948479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.948508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.948897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.948928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.949299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.949330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.949743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.949774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.950086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.950116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 
00:30:23.251 [2024-12-06 18:42:17.950447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.950476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.950838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.950870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.951238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.951269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.951624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.951665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.951931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.951960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.952305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.952335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.952717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.952748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.953113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.953142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.953489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.953518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.953904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.953934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 
00:30:23.251 [2024-12-06 18:42:17.954311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.954339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.954685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.954723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.955113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.955142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.955498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.955528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.955890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.955920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.956186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.956216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.956566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.956595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.957001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.957032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.957293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.957322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 00:30:23.251 [2024-12-06 18:42:17.957609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.251 [2024-12-06 18:42:17.957661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.251 qpair failed and we were unable to recover it. 
00:30:23.252 [2024-12-06 18:42:17.958054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.958085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.958354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.958383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.958732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.958763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.959134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.959165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.959539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.959569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.959947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.959977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.960337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.960368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.960776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.960807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.961173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.961203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.961568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.961598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 
00:30:23.252 [2024-12-06 18:42:17.961962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.961993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.962362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.962390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.962769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.962801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.963165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.963195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.963628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.963667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.964023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.964052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.964405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.964436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.964694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.964727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.965176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.965207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.965560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.965589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 
00:30:23.252 [2024-12-06 18:42:17.965972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.966003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.966352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.966381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.966816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.966847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.967200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.967229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.967578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.967606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.968037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.968068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.968298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.968328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.968527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:23.252 [2024-12-06 18:42:17.968696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.968728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-12-06 18:42:17.968998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-12-06 18:42:17.969030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 
00:30:23.252 [... the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence repeats continuously from 18:42:17.969 through 18:42:18.018, only the timestamps differing; the Jenkins elapsed-time prefix ticks from 00:30:23.252 to 00:30:23.530 partway through, and no other messages are interleaved in this stretch ...]
00:30:23.530 [... seven more identical connect()/qpair-failure sequences between 18:42:18.018 and 18:42:18.021 ...]
00:30:23.530 [2024-12-06 18:42:18.021091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.530 [2024-12-06 18:42:18.021121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.530 qpair failed and we were unable to recover it.
00:30:23.530 [2024-12-06 18:42:18.021109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:23.530 [2024-12-06 18:42:18.021155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:23.530 [2024-12-06 18:42:18.021165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:23.530 [2024-12-06 18:42:18.021172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:23.530 [2024-12-06 18:42:18.021179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
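The app_setup_trace notices above are the freshly started target advertising its tracing hooks. The lines below simply collect what those notices describe; the spdk_trace command and the /dev/shm path are quoted from the log, while the copy destination is an arbitrary choice of ours:

    # Snapshot nvmf tracepoints from the running app (instance id 0),
    # exactly as the NOTICE suggests:
    spdk_trace -s nvmf -i 0
    # Or grab the shared-memory trace file for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0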
00:30:23.530 [... five more identical connect()/qpair-failure sequences between 18:42:18.021 and 18:42:18.023 ...]
00:30:23.530 [2024-12-06 18:42:18.023190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:23.530 [2024-12-06 18:42:18.023351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:23.530 [2024-12-06 18:42:18.023529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:23.530 [2024-12-06 18:42:18.023529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:23.530 [... four more connect()/qpair-failure sequences, interleaved with the reactor notices above, between 18:42:18.023 and 18:42:18.024 ...]
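Reactors coming up on cores 4-7 is consistent with the earlier 'Total cores available: 4' notice: the target appears to have been launched with a four-core CPU mask covering exactly those cores, and with tracing enabled given the 0xFFFF tracepoint group mask. A plausible launch line is sketched below; the binary path and both masks are our inference from the notices, not something printed in this log:

    # Inferred launch: cores 4-7 correspond to cpumask bits 4..7, i.e. 0xF0,
    # and the 0xFFFF tracepoint group mask matches the app_setup_trace notice.
    ./build/bin/nvmf_tgt -m 0xF0 -e 0xFFFF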
00:30:23.531 [... the connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it. sequence continues uninterrupted for several dozen more attempts, timestamps 18:42:18.024 through 18:42:18.042, all against tqpair=0x7f63b0000b90, addr=10.0.0.2, port=4420 ...]
00:30:23.532 [2024-12-06 18:42:18.042800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.042831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.043184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.043215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.043579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.043609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.043972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.044002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.044366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.044394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.044768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.044798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.045010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.045038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.045416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.045445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.045705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.045739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.045868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.045897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 
00:30:23.532 [2024-12-06 18:42:18.046274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.046303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.046671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.046703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.047084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.047113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.047490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.047520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.047765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.047795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.048148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.048184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.048531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.048562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.048791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.048821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.049168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.049197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.049568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.049598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 
00:30:23.532 [2024-12-06 18:42:18.049846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.049877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.050011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.050039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.050315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.050346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.532 [2024-12-06 18:42:18.050701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.532 [2024-12-06 18:42:18.050733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.532 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.050960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.050989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.051381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.051411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.051778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.051808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.052069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.052098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.052475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.052506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.052863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.052893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 
00:30:23.533 [2024-12-06 18:42:18.053104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.053132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.053382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.053411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.053624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.053676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.054011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.054042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.054400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.054430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.054799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.054830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.055184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.055214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.055569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.055598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.055963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.055994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.056351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.056380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 
00:30:23.533 [2024-12-06 18:42:18.056594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.056622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.057015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.057044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.057276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.057305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.057520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.057550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.057805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.057836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.058209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.058239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.058610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.058649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.058925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.058954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.059202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.059231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.059581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.059610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 
00:30:23.533 [2024-12-06 18:42:18.059973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.060003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.060386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.060415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.060759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.060791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.061021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.061050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.061445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.061475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.061728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.061767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.062131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.062161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.062536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.062566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.062927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.062959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.533 [2024-12-06 18:42:18.063198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.063227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 
00:30:23.533 [2024-12-06 18:42:18.063573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.533 [2024-12-06 18:42:18.063604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.533 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.063980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.064012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.064262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.064290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.064659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.064690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.065037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.065067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.065422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.065499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.065721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.065753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.066024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.066056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.066287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.066317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.066569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.066598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 
00:30:23.534 [2024-12-06 18:42:18.066850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.066883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.067183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.067212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.067425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.067455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.067813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.067843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.068130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.068159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.068552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.068583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.068817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.068851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.069216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.069245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.069466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.069499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.069746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.069776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 
00:30:23.534 [2024-12-06 18:42:18.070112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.070143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.070519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.070548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.070774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.070804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.071056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.071085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.071312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.071340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.071614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.071653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.071900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.071929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.072279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.072310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.072554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.072587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.072960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.072992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 
00:30:23.534 [2024-12-06 18:42:18.073198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.073226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.073594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.534 [2024-12-06 18:42:18.073624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.534 qpair failed and we were unable to recover it. 00:30:23.534 [2024-12-06 18:42:18.073939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.073971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.074184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.074214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.074425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.074455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.074838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.074877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.075126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.075155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.075393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.075422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.075802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.075833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.076202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.076231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 
00:30:23.535 [2024-12-06 18:42:18.076602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.076632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.076862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.076893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.077126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.077158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.077524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.077554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.077763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.077793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.078173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.078203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.078541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.078572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.078923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.078954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.079318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.079347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.079612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.079653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 
00:30:23.535 [2024-12-06 18:42:18.079874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.079903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.080035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.080069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.080310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.080339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.080704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.080736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.080983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.081012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.081337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.081368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.081729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.081760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.082117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.082154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.082515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.082544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 00:30:23.535 [2024-12-06 18:42:18.082925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.535 [2024-12-06 18:42:18.082958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.535 qpair failed and we were unable to recover it. 
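The failure mode above is ordinary at the TCP layer: errno 111 on Linux is ECONNREFUSED, i.e. the connection attempt reached 10.0.0.2 but nothing was listening on port 4420 (the IANA-assigned NVMe/TCP port), which is expected while the test tears down or restarts the target. A minimal standalone sketch, not part of the test suite, that reproduces the same errno by connecting to a port with no listener (address and port mirror the log; any reachable host with a closed port behaves the same):

/* Sketch only: reproduce the errno behind posix_sock_create's complaint.
 * Connecting to a reachable host with no listener on the port fails with
 * ECONNREFUSED (111); an unreachable host would time out instead. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* address taken from the log */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}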
00:30:23.535 [2024-12-06 18:42:18.083052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.535 [2024-12-06 18:42:18.083080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.535 qpair failed and we were unable to recover it.
00:30:23.535 Read completed with error (sct=0, sc=8)
00:30:23.535 starting I/O failed
[... all 32 outstanding I/Os on the qpair (25 reads, 7 writes) complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:30:23.536 [2024-12-06 18:42:18.083945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.536 [2024-12-06 18:42:18.084323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.536 [2024-12-06 18:42:18.084384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420
00:30:23.536 qpair failed and we were unable to recover it.
[... the same three-record sequence repeats for tqpair=0x7f63b4000b90, timestamps 18:42:18.084892 through 18:42:18.098304 ...]
00:30:23.537 [2024-12-06 18:42:18.098633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.098672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.098922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.098951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.099077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.099110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.099459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.099489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.099931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.099961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.100310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.100340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.100571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.100600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.100977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.101008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.101402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.101434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.101770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.101801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 
00:30:23.537 [2024-12-06 18:42:18.102189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.102218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.102582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.102610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.102987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.103018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.103384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.103413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.103798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.103831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.104039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.104069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.104346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.104374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.104729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.104759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.105130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.105160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.105425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.105457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 
00:30:23.537 [2024-12-06 18:42:18.105814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.105846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.106071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.106101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.106494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.106523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.106870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.106901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.107252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.107281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.107663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.107694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.107953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.107981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.108238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.108265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.108594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.108623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.108965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.108996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 
00:30:23.537 [2024-12-06 18:42:18.109366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.109395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.109774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.109805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.110069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.110098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.110347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.537 [2024-12-06 18:42:18.110375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.537 qpair failed and we were unable to recover it. 00:30:23.537 [2024-12-06 18:42:18.110718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.110756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.110972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.111001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.111359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.111389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.111764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.111797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.112160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.112190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.112550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.112579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 
00:30:23.538 [2024-12-06 18:42:18.112951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.112981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.113344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.113374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.113536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.113566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.113905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.113938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.114283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.114312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.114684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.114713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.114977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.115006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.115277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.115309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.115667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.115697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.116045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.116075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 
00:30:23.538 [2024-12-06 18:42:18.116441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.116472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.116684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.116714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.116987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.117016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.117417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.117448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.117795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.117825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.118193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.118222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.118495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.118523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.118918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.118949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.119320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.119350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.119718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.119748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 
00:30:23.538 [2024-12-06 18:42:18.120126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.120156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.120491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.120520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.120851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.120881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.121234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.121264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.121649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.121680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.122058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.122087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.122443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.538 [2024-12-06 18:42:18.122473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.538 qpair failed and we were unable to recover it. 00:30:23.538 [2024-12-06 18:42:18.122744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.122774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.123144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.123173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.123540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.123569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 
00:30:23.539 [2024-12-06 18:42:18.123931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.123969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.124229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.124258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.124618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.124658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.124911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.124940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.125310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.125345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.125671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.125701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.125806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.125836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.126190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.126219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.126593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.126623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.126912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.126942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 
00:30:23.539 [2024-12-06 18:42:18.127323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.127352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.127718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.127749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.128085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.128115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.128344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.128373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.128626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.128665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.129050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.129080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.129458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.129488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.129727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.129758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.130120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.130149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.130503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.130532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 
00:30:23.539 [2024-12-06 18:42:18.130888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.130918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.131295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.131324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.131698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.131729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.132098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.132126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.132468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.132498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.132854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.132884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.133255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.133283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.133398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.133429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.133777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.133807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.134176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.134206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 
00:30:23.539 [2024-12-06 18:42:18.134429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.134457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.134814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.134847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.135208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.135237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.135599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.135627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.539 [2024-12-06 18:42:18.135968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.539 [2024-12-06 18:42:18.135997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.539 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.136360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.136389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.136756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.136787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.137160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.137189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.137549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.137578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.137931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.137960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 
00:30:23.540 [2024-12-06 18:42:18.138327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.138357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.138709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.138739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.139087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.139115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.139483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.139512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.139773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.139810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.140148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.140179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.140529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.140559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.140903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.140933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.141305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.141334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.141677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.141706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 
00:30:23.540 [2024-12-06 18:42:18.142089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.142118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.142349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.142378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.142719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.142748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.143122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.143150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.143607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.143645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.144012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.144040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.144397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.144426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.144680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.144713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.145091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.145122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 00:30:23.540 [2024-12-06 18:42:18.145475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.540 [2024-12-06 18:42:18.145504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:23.540 qpair failed and we were unable to recover it. 
00:30:23.540 [2024-12-06 18:42:18.145792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.540 [2024-12-06 18:42:18.145822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420
00:30:23.540 qpair failed and we were unable to recover it.
00:30:23.541 [... the same three-line connect()/qpair error repeats for tqpair=0x7f63b4000b90, with only the per-attempt timestamps varying, through 2024-12-06 18:42:18.163001 ...]
00:30:23.541 [2024-12-06 18:42:18.163630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.541 [2024-12-06 18:42:18.163752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.541 qpair failed and we were unable to recover it.
00:30:23.546 [... the same three-line error then repeats for tqpair=0x7f63b0000b90, again with only timestamps varying, through 2024-12-06 18:42:18.222009 ...]
00:30:23.546 [2024-12-06 18:42:18.222213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.222242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.222736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.222831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.223195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.223228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.223578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.223611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.224089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.224195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.224586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.224618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.224870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.224900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.225272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.225302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.225615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.225652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.225970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.226000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 
00:30:23.546 [2024-12-06 18:42:18.226377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.226406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.226770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.226801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.227179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.227207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.227568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.227596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.227962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.227999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.228220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.228248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.228387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.228415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.228747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.228777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.229148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.229178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.229441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.229469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 
00:30:23.546 [2024-12-06 18:42:18.229804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.229835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.229961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.229989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.230366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.230395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.230752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.230784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.231165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.546 [2024-12-06 18:42:18.231193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.546 qpair failed and we were unable to recover it. 00:30:23.546 [2024-12-06 18:42:18.231522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.231551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.231894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.231923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.232132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.232160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.232538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.232567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.232933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.232962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 
00:30:23.547 [2024-12-06 18:42:18.233321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.233351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.233604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.233633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.233993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.234023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.234409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.234439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.234678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.234709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.235060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.235089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.235453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.235482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.235864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.235893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.236239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.236269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.236628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.236665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 
00:30:23.547 [2024-12-06 18:42:18.236917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.236945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.237275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.237305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.237668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.237699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.237920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.237948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.238176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.238205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.238578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.238606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.238918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.238950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.239318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.239347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.239791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.239822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.240233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.240263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 
00:30:23.547 [2024-12-06 18:42:18.240628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.240665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.241016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.241044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.241394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.241424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.241797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.241827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.242177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.242206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.242561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.242591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.242810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.242839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.243163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.243191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.243565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.547 [2024-12-06 18:42:18.243594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.547 qpair failed and we were unable to recover it. 00:30:23.547 [2024-12-06 18:42:18.243826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.243857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 
00:30:23.548 [2024-12-06 18:42:18.244099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.244127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.244490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.244520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.244862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.244893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.245185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.245213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.245576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.245605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.245891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.245921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.246139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.246167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.246512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.246541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.246900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.246930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.247298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.247327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 
00:30:23.548 [2024-12-06 18:42:18.247606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.247634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.248015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.248044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.248396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.248425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.248631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.248672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.249023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.249051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.249409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.249438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.249798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.249828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.250213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.250242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.250614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.250650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.250976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.251005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 
00:30:23.548 [2024-12-06 18:42:18.251364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.251392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.251762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.251798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.252206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.252234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.252584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.252612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.252975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.253005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.253340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.253370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.253634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.253674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.254054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.254083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.254454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.254482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.254819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.254848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 
00:30:23.548 [2024-12-06 18:42:18.255194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.255223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.255580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.255608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.255971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.256000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.256230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.256259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.256625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.256661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.257012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.548 [2024-12-06 18:42:18.257041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.548 qpair failed and we were unable to recover it. 00:30:23.548 [2024-12-06 18:42:18.257394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.257423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.257810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.257839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.258051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.258080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.258336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.258368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 
00:30:23.549 [2024-12-06 18:42:18.258812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.258843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.259192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.259221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.259573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.259601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.259979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.260009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.260349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.260377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.260763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.260793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.261051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.261079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.261419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.261448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.261693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.261722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.262138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.262167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 
00:30:23.549 [2024-12-06 18:42:18.262515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.262544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.262924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.262954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.263335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.263365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.263585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.263613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.264007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.264036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.264393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.264423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.264648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.264677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.265022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.265052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.265431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.265459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.265829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.265858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 
00:30:23.549 [2024-12-06 18:42:18.266308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.266337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.266428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.266461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.266780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.266810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.267155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.267184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.267529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.267558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.267932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.267962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.268335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.268364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.268601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.268630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.269029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.269057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 00:30:23.549 [2024-12-06 18:42:18.269421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.269450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it. 
00:30:23.549 [2024-12-06 18:42:18.269795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.549 [2024-12-06 18:42:18.269824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.549 qpair failed and we were unable to recover it.
00:30:23.549 [... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every subsequent connect attempt from 2024-12-06 18:42:18.270159 through 18:42:18.342575; only the timestamps differ ...]
00:30:23.835 [2024-12-06 18:42:18.342943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.342974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it.
00:30:23.835 [2024-12-06 18:42:18.343184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.343213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.343474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.343506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.343774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.343805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.344176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.344206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.344563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.344593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.344967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.344999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.345360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.345390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.345770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.345801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.346038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.346067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.346468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.346499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 
00:30:23.835 [2024-12-06 18:42:18.346876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.346906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.347209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.347238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.347461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.347490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.347720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.347751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.347989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.348017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.348233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.348264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.348611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.348646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.348924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.348953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.349347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.349376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 00:30:23.835 [2024-12-06 18:42:18.349753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.835 [2024-12-06 18:42:18.349784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.835 qpair failed and we were unable to recover it. 
00:30:23.836 [2024-12-06 18:42:18.350170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.350200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.350388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.350425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.350781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.350815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.351197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.351226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.351589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.351618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.352051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.352082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.352322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.352353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.352595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.352628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.352993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.353024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.353261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.353293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 
00:30:23.836 [2024-12-06 18:42:18.353668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.353701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.353986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.354016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.354243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.354273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.354650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.354680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.354893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.354922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.355163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.355194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.355513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.355543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.355752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.355781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.356013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.356043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.356389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.356419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 
00:30:23.836 [2024-12-06 18:42:18.356780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.356811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.357165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.357195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.357537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.357567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.357950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.357981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.358334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.358366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.358715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.358745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.359074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.359104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.359385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.359416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.359535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.359568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.359938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.359969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 
00:30:23.836 [2024-12-06 18:42:18.360340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.360371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.836 [2024-12-06 18:42:18.360771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.836 [2024-12-06 18:42:18.360802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.836 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.361163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.361191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.361560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.361589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.361942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.361973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.362335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.362365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.362594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.362626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.363032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.363063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.363440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.363470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.363862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.363894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 
00:30:23.837 [2024-12-06 18:42:18.364248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.364279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.364656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.364693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.365029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.365059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.365247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.365277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.365649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.365681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.366073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.366103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.366373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.366402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.366658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.366692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.367066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.367097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.367412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.367443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 
00:30:23.837 [2024-12-06 18:42:18.367817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.367850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.368196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.368234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.368456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.368486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.368826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.368855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.369243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.369273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.369508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.369537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.369906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.369937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.370184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.370214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.370573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.370603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.370972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.371002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 
00:30:23.837 [2024-12-06 18:42:18.371232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.371262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.371509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.371538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.371890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.371921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.372020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.372057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.372417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.372447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.372799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.372829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.373193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.373222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.373580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.373609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.373997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.374027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.837 qpair failed and we were unable to recover it. 00:30:23.837 [2024-12-06 18:42:18.374399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.837 [2024-12-06 18:42:18.374429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 
00:30:23.838 [2024-12-06 18:42:18.374800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.374832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.375209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.375238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.375598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.375627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.375887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.375919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.376292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.376322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.376692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.376723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.377104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.377135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.377494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.377523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.377756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.377785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.378007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.378037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 
00:30:23.838 [2024-12-06 18:42:18.378387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.378415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.378770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.378807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.379165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.379194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.379558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.379586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.379961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.379992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.380269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.380299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.380650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.380681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.380913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.380942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.381303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.381331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.381689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.381721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 
00:30:23.838 [2024-12-06 18:42:18.381968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.381997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.382368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.382397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.382723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.382752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.383126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.383156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.383366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.383394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.383631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.383673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.384031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.384060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.384408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.384437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.384663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.384692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.385059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.385089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 
00:30:23.838 [2024-12-06 18:42:18.385434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.385463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.385819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.385849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.386217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.386246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.386602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.386630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.386894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.386927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.387268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.387299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.387658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.838 [2024-12-06 18:42:18.387687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.838 qpair failed and we were unable to recover it. 00:30:23.838 [2024-12-06 18:42:18.388076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.839 [2024-12-06 18:42:18.388104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.839 qpair failed and we were unable to recover it. 00:30:23.839 [2024-12-06 18:42:18.388467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.839 [2024-12-06 18:42:18.388496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.839 qpair failed and we were unable to recover it. 00:30:23.839 [2024-12-06 18:42:18.388884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.839 [2024-12-06 18:42:18.388914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.839 qpair failed and we were unable to recover it. 
00:30:23.839 [2024-12-06 18:42:18.389152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.839 [2024-12-06 18:42:18.389184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.839 qpair failed and we were unable to recover it.
00:30:23.839 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f63b0000b90 (addr=10.0.0.2, port=4420) from 18:42:18.389437 through 18:42:18.463275 ...]
00:30:23.844 [2024-12-06 18:42:18.463506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.844 [2024-12-06 18:42:18.463534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.844 qpair failed and we were unable to recover it.
00:30:23.844 [2024-12-06 18:42:18.463902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.844 [2024-12-06 18:42:18.463932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.844 qpair failed and we were unable to recover it. 00:30:23.844 [2024-12-06 18:42:18.464301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.844 [2024-12-06 18:42:18.464330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.844 qpair failed and we were unable to recover it. 00:30:23.844 [2024-12-06 18:42:18.464565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.844 [2024-12-06 18:42:18.464593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.844 qpair failed and we were unable to recover it. 00:30:23.844 [2024-12-06 18:42:18.464874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.844 [2024-12-06 18:42:18.464904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.844 qpair failed and we were unable to recover it. 00:30:23.844 [2024-12-06 18:42:18.465284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.844 [2024-12-06 18:42:18.465312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.844 qpair failed and we were unable to recover it. 00:30:23.844 [2024-12-06 18:42:18.465684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.844 [2024-12-06 18:42:18.465715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.844 qpair failed and we were unable to recover it. 00:30:23.844 [2024-12-06 18:42:18.466053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.844 [2024-12-06 18:42:18.466083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.844 qpair failed and we were unable to recover it. 00:30:23.844 [2024-12-06 18:42:18.466301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.844 [2024-12-06 18:42:18.466330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.844 qpair failed and we were unable to recover it. 00:30:23.844 [2024-12-06 18:42:18.466717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.466747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.467123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.467152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 
00:30:23.845 [2024-12-06 18:42:18.467506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.467533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.467847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.467876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.468225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.468255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.468515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.468544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.468917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.468946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.469166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.469194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.469553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.469582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.469826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.469860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.470220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.470249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.470628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.470682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 
00:30:23.845 [2024-12-06 18:42:18.470892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.470921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.471237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.471265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.471657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.471687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.471917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.471945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.472186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.472215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.472582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.472611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.472991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.473020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.473392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.473420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.473804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.473840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.474099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.474127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 
00:30:23.845 [2024-12-06 18:42:18.474482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.474513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.474866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.474896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.475147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.475175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.475542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.475570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.475943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.475972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.476208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.476239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.476601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.476629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.476846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.476875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.477246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.477275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.477652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.477683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 
00:30:23.845 [2024-12-06 18:42:18.477936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.477965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.478324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.845 [2024-12-06 18:42:18.478354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.845 qpair failed and we were unable to recover it. 00:30:23.845 [2024-12-06 18:42:18.478737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.478767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.479143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.479174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.479533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.479562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.479795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.479827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.480205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.480234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.480441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.480470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.480614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.480653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.481019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.481048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 
00:30:23.846 [2024-12-06 18:42:18.481270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.481299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.481615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.481650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.481947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.481977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.482343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.482372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.482745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.482774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.483138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.483166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.483533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.483563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.483810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.483843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.484208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.484237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.484591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.484620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 
00:30:23.846 [2024-12-06 18:42:18.484986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.485014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.485378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.485405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.485790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.485819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.486037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.486065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.486381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.486409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.486751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.486782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.487108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.487137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.487519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.487547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.487904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.487941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.488186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.488215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 
00:30:23.846 [2024-12-06 18:42:18.488585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.488613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.488897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.488932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.489287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.489317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.489667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.489697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.490029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.490058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.490425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.490455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.490818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.490848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.491218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.491248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.491616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.846 [2024-12-06 18:42:18.491654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.846 qpair failed and we were unable to recover it. 00:30:23.846 [2024-12-06 18:42:18.492029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.492057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 
00:30:23.847 [2024-12-06 18:42:18.492438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.492467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.492810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.492840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.493230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.493260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.493623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.493662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.494018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.494047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.494401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.494431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.494795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.494826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.495198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.495229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.495461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.495489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.495887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.495917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 
00:30:23.847 [2024-12-06 18:42:18.496150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.496178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.496541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.496569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.496937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.496966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.497335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.497365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.497742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.497771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.498142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.498173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.498558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.498587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.498902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.498932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.499273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.499303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.499668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.499697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 
00:30:23.847 [2024-12-06 18:42:18.500053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.500082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.500295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.500324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.500661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.500689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.500926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.500954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.501281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.501308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.501528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.501557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.501948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.501978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.502187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.502215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.502463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.502498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.502888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.502917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 
00:30:23.847 [2024-12-06 18:42:18.503181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.503209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.503578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.503606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.503985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.504014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.504221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.504249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.504573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.504602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.504938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.504969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.847 [2024-12-06 18:42:18.505321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.847 [2024-12-06 18:42:18.505350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.847 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.505720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.505750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.506107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.506136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.506473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.506501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 
00:30:23.848 [2024-12-06 18:42:18.506861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.506890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.507276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.507305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.507685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.507715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.508069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.508099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.508462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.508490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.508864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.508894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.509100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.509130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.509501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.509530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.509891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.509921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 00:30:23.848 [2024-12-06 18:42:18.510300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.848 [2024-12-06 18:42:18.510330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.848 qpair failed and we were unable to recover it. 
00:30:23.848 [2024-12-06 18:42:18.510711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.848 [2024-12-06 18:42:18.510740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.848 qpair failed and we were unable to recover it.
00:30:23.848 [2024-12-06 18:42:18.511094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.848 [2024-12-06 18:42:18.511124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.848 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously for timestamps 18:42:18.511494 through 18:42:18.584419 ...]
00:30:23.854 [2024-12-06 18:42:18.584665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.854 [2024-12-06 18:42:18.584696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420
00:30:23.854 qpair failed and we were unable to recover it.
00:30:23.854 [2024-12-06 18:42:18.585133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.585164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.585529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.585558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.585924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.585953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.586313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.586342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.586707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.586738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.587087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.587115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.587483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.587511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.587863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.587894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.588241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.588277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.588661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.588693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 
00:30:23.854 [2024-12-06 18:42:18.589036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.589064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.589243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.589275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.589644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.589677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.590034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.590065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.590386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.590415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.590812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.590842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.591208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.591240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.591501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.591530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.591862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.591892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.591994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.592023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b0000b90 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 
00:30:23.854 [2024-12-06 18:42:18.592518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.592626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.593174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.593278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.593615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.593678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.593812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.593872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.594001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.594032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.594392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.594423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.594822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.594857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.595185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.595214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.595624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.595664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.596048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.596079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 
00:30:23.854 [2024-12-06 18:42:18.596315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.596347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.596703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.596733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.597110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.597141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.597493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.597526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:23.854 [2024-12-06 18:42:18.597899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.854 [2024-12-06 18:42:18.597931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:23.854 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.598281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.598336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.598700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.598733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.599091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.599119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.599406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.599436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.599839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.599870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 
00:30:24.126 [2024-12-06 18:42:18.600200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.600230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.600585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.600615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.600849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.600883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.601276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.601309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.601698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.601729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.602088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.602119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.602495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.602527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.602774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.602810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.602948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.602978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.603355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.603388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 
00:30:24.126 [2024-12-06 18:42:18.603743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.603774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.604005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.604034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.604384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.604416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.604767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.604798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.605152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.605183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.605565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.605598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.605975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.606008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.606265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.606294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.606681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.606713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.607043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.607075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 
00:30:24.126 [2024-12-06 18:42:18.607315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.607347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.607698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.607732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.608122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.608154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.608472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.608501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.608727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.126 [2024-12-06 18:42:18.608758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.126 qpair failed and we were unable to recover it. 00:30:24.126 [2024-12-06 18:42:18.609013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.609043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.609274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.609307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.609660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.609691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.610070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.610100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.610316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.610346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 
00:30:24.127 [2024-12-06 18:42:18.610739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.610770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.611133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.611166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.611521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.611551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.611902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.611934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.612334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.612366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.612602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.612647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.613068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.613102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.613448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.613478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.613739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.613772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.614052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.614082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 
00:30:24.127 [2024-12-06 18:42:18.614461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.614491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.614707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.614739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.614989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.615021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.615318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.615348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.615725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.615757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.616138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.616169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.616549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.616579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.616940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.616971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.617335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.617365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.617477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.617506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 
00:30:24.127 [2024-12-06 18:42:18.617884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.617914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.618286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.618317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.618707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.618738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.618853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.618887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11af0c0 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.619324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.619431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.620037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.620138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.620595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.620634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.621011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.621045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.621258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.621290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.621662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.621695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 
00:30:24.127 [2024-12-06 18:42:18.622034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.622064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.622400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.622431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.622890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.127 [2024-12-06 18:42:18.622993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.127 qpair failed and we were unable to recover it. 00:30:24.127 [2024-12-06 18:42:18.623328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.623369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.623815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.623849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.624129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.624161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.624510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.624541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.624895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.624926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.625281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.625310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.625694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.625726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 
00:30:24.128 [2024-12-06 18:42:18.626153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.626182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.626626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.626673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.627055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.627086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.627296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.627325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.627684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.627716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.627982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.628011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.628377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.628414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.628644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.628676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.629053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.629083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.629428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.629460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 
00:30:24.128 [2024-12-06 18:42:18.629817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.629848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.630060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.630090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.630461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.630490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.630859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.630892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.631237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.631268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.631634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.631679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.632031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.632062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.632457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.632487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.632849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.632880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 00:30:24.128 [2024-12-06 18:42:18.633249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.128 [2024-12-06 18:42:18.633278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.128 qpair failed and we were unable to recover it. 
00:30:24.128 [2024-12-06 18:42:18.633636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.128 [2024-12-06 18:42:18.633675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420
00:30:24.128 qpair failed and we were unable to recover it.
00:30:24.128 [... the same three-line sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with advancing timestamps from 18:42:18.634037 through 18:42:18.690687 ...]
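For context on the repeated failure above: on Linux, errno 111 is ECONNREFUSED, i.e. the target at 10.0.0.2:4420 is reachable but nothing is accepting TCP connections on the NVMe/TCP port, so the host-side driver keeps retrying and logging the same pair of errors. That is the expected state mid-way through a target-disconnect test, where the target is deliberately taken down while host qpairs attempt to reconnect. A minimal standalone sketch (not part of SPDK or this test suite) that reproduces the same errno with nothing but bash:

```bash
#!/usr/bin/env bash
# Illustrative only: reproduce the condition the SPDK posix sock layer logs
# above. errno 111 on Linux is ECONNREFUSED, raised when the peer host is
# reachable but no listener is bound to the port. /dev/tcp/<host>/<port> is
# a bash built-in pseudo-device, so no external tools are needed. The
# address and port are copied from the log; any closed port behaves the same.
if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        # bash itself reports "connect: Connection refused" here -- the same
        # condition nvme_tcp_qpair_connect_sock keeps retrying against.
        echo "connect to 10.0.0.2:4420 refused (errno 111, ECONNREFUSED)"
fi
```

The retries continue below, interleaved with shell-trace output from the test harness.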
00:30:24.133 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:24.133 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:24.133 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:24.133 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:24.133 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.134 [... interleaved with the shell trace above and continuing after it, the same connect() errno = 111 / qpair-failure sequence for tqpair=0x7f63b4000b90 (10.0.0.2:4420) repeats with advancing timestamps from 18:42:18.691067 through 18:42:18.707978 ...]
00:30:24.134 [2024-12-06 18:42:18.708223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.708252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.708611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.708647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.708874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.708903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.709286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.709316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.709549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.709577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.710013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.710045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.710405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.710436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.710816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.710846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.711105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.711137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.711492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.711522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 
00:30:24.134 [2024-12-06 18:42:18.711862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.711893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.712102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.712131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.712506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.712535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.712884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.712916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.713305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.713334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.713696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.713726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.714084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.714120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.714342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.714371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.714765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.714795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.715190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.715219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 
00:30:24.134 [2024-12-06 18:42:18.715590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.134 [2024-12-06 18:42:18.715620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.134 qpair failed and we were unable to recover it. 00:30:24.134 [2024-12-06 18:42:18.715979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.716008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.716262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.716294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.716667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.716698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.717068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.717099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.717455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.717484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.717717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.717747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.717884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.717915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63b4000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.718326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.718436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.718964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.719069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 
00:30:24.135 [2024-12-06 18:42:18.719523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.719565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.719920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.720025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.720465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.720502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.720890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.720925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.721299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.721328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.721590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.721620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.721914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.721944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.722045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.722074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.722331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.722361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.722567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.722598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 
00:30:24.135 [2024-12-06 18:42:18.722864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.722896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.723246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.723276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.723377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.723406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.723769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.723801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.724130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.724160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.724407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.724439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.724785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.724815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.725031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.725061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.725298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.725326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.725689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.725721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 
00:30:24.135 [2024-12-06 18:42:18.725923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.725952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.726331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.726360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.726719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.726751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.727135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.135 [2024-12-06 18:42:18.727165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.135 qpair failed and we were unable to recover it. 00:30:24.135 [2024-12-06 18:42:18.727525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.727555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.727911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.727941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.728329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.728359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.728736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.728768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.729129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.729159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.729402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.729431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 
00:30:24.136 [2024-12-06 18:42:18.729772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.729805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.730053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.730083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.730493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.730527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.730740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.730770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.731065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.731094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.731341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.731371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.731609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.731647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.732020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.732052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.732286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.732315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.732586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.732618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 
00:30:24.136 [2024-12-06 18:42:18.732990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.733022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.733250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.733280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:24.136 [2024-12-06 18:42:18.733649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.733681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.136 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.136 [2024-12-06 18:42:18.734052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.734083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.734348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.734382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.734727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.734758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.735152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.735183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.735447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.735476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 
00:30:24.136 [2024-12-06 18:42:18.735748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.735779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.736155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.736186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.736525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.736554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.736779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.736815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.737203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.737233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.737585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.737615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.737845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.737874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.738165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.738193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.738553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.738581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.738853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.738882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 
00:30:24.136 [2024-12-06 18:42:18.739233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.739262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.739622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.739676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.136 qpair failed and we were unable to recover it. 00:30:24.136 [2024-12-06 18:42:18.740031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.136 [2024-12-06 18:42:18.740061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.740313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.740342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.740687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.740717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.740982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.741010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.741235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.741264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.741633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.741671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.741778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.741810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.742175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.742205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 
00:30:24.137 [2024-12-06 18:42:18.742570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.742599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.742855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.742884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.743182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.743211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.743578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.743606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.743985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.744016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.744244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.744272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.744666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.744696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.745051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.745080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.745477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.745505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.745741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.745770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 
00:30:24.137 [2024-12-06 18:42:18.746141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.746170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.746544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.746572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.746964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.746993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.747367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.747396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.747761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.747792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.748172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.748201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.748429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.748457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.748862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.748892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.749293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.749322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.749683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.749712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 
00:30:24.137 [2024-12-06 18:42:18.750027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.750056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.750398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.750427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.750681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.750714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.751062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.751099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.751434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.751463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.751810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.751840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.752211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.752241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.752617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.752655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.753050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.753079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 00:30:24.137 [2024-12-06 18:42:18.753424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.137 [2024-12-06 18:42:18.753452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420 00:30:24.137 qpair failed and we were unable to recover it. 
00:30:24.137 [2024-12-06 18:42:18.753792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.138 [2024-12-06 18:42:18.753823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.138 qpair failed and we were unable to recover it.
00:30:24.139 [2024-12-06 18:42:18.768604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.768634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
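errno = 111 in these retries is ECONNREFUSED: the target side of this disconnect test has torn its listener down, so every reconnect attempt from the initiator is refused until the listener is re-created further down. As a rough standalone illustration (not part of the test suite; address and port taken from the log), the same condition can be watched from a shell like this:

    # Sketch: poll until something accepts TCP connections on 10.0.0.2:4420.
    # Until then, each attempt fails exactly like the connect() calls above
    # (ECONNREFUSED, errno 111).
    while ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        sleep 0.1
    done
    echo "listener on 10.0.0.2:4420 is accepting connections again"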
00:30:24.139 Malloc0
00:30:24.139 [2024-12-06 18:42:18.768912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.768941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
00:30:24.139 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.139 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:24.139 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:24.139 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.139 [2024-12-06 18:42:18.771607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.771645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
00:30:24.139 [2024-12-06 18:42:18.771983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.772011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
00:30:24.139 [2024-12-06 18:42:18.775189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.775218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
00:30:24.139 [2024-12-06 18:42:18.775594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.775629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
00:30:24.139 [2024-12-06 18:42:18.776060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:24.139 [2024-12-06 18:42:18.776264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.776293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
00:30:24.139 [2024-12-06 18:42:18.778652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.778681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
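The rpc_cmd trace at target_disconnect.sh@21 is what produces the *** TCP Transport Init *** notice above: the test re-creates the target's TCP transport while the initiator is still retrying. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so outside the harness the equivalent step would look roughly like this sketch (the default RPC socket path is an assumption here; -o is carried over verbatim from the trace):

    # Re-create the NVMe-oF TCP transport on a running nvmf_tgt
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o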
00:30:24.139 [2024-12-06 18:42:18.779067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.139 [2024-12-06 18:42:18.779097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.139 qpair failed and we were unable to recover it.
00:30:24.140 [2024-12-06 18:42:18.782507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.140 [2024-12-06 18:42:18.782537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.140 qpair failed and we were unable to recover it.
00:30:24.140 [2024-12-06 18:42:18.782780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.140 [2024-12-06 18:42:18.782810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.140 qpair failed and we were unable to recover it.
00:30:24.140 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.140 [2024-12-06 18:42:18.785340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.140 [2024-12-06 18:42:18.785371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.140 qpair failed and we were unable to recover it.
00:30:24.140 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:24.140 [2024-12-06 18:42:18.785717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.140 [2024-12-06 18:42:18.785747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.140 qpair failed and we were unable to recover it.
00:30:24.140 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:24.140 [2024-12-06 18:42:18.786105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.140 [2024-12-06 18:42:18.786135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.140 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.140 qpair failed and we were unable to recover it.
00:30:24.140 [2024-12-06 18:42:18.789068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.140 [2024-12-06 18:42:18.789096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.140 qpair failed and we were unable to recover it.
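The target_disconnect.sh@22 trace above re-creates the subsystem the host keeps dialing. A standalone sketch of that call (flags copied from the trace: -a allows any host NQN to connect, -s sets the subsystem serial number; the socket path is assumed as before):

    # Create the subsystem nqn.2016-06.io.spdk:cnode1 on the target
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem \
        nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001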
00:30:24.140 [2024-12-06 18:42:18.789426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.140 [2024-12-06 18:42:18.789455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.140 qpair failed and we were unable to recover it.
00:30:24.141 [2024-12-06 18:42:18.796093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.141 [2024-12-06 18:42:18.796121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.141 qpair failed and we were unable to recover it.
00:30:24.141 [2024-12-06 18:42:18.796364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.141 [2024-12-06 18:42:18.796392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.141 qpair failed and we were unable to recover it.
00:30:24.141 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.141 [2024-12-06 18:42:18.797536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.141 [2024-12-06 18:42:18.797565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.141 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:24.141 qpair failed and we were unable to recover it.
00:30:24.141 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:24.141 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.141 [2024-12-06 18:42:18.799130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.141 [2024-12-06 18:42:18.799160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.141 qpair failed and we were unable to recover it.
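The target_disconnect.sh@24 trace attaches the Malloc0 bdev (created earlier in the run) to that subsystem as a namespace; until a listener is added the initiator still has nothing to connect to, hence the continuing errno 111 retries. A standalone sketch of the same step:

    # Expose the Malloc0 bdev as a namespace of cnode1
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 Malloc0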
00:30:24.141 [2024-12-06 18:42:18.799511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.141 [2024-12-06 18:42:18.799540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.141 qpair failed and we were unable to recover it.
00:30:24.142 [2024-12-06 18:42:18.806655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.142 [2024-12-06 18:42:18.806684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 [2024-12-06 18:42:18.807037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.142 [2024-12-06 18:42:18.807076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.142 [2024-12-06 18:42:18.809485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.142 [2024-12-06 18:42:18.809514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:24.142 [2024-12-06 18:42:18.809784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.142 [2024-12-06 18:42:18.809812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.142 [2024-12-06 18:42:18.812379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.142 [2024-12-06 18:42:18.812410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 [2024-12-06 18:42:18.812778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.142 [2024-12-06 18:42:18.812809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 [2024-12-06 18:42:18.816039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.142 [2024-12-06 18:42:18.816069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 [2024-12-06 18:42:18.816363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.142 [2024-12-06 18:42:18.816393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f63bc000b90 with addr=10.0.0.2, port=4420
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 [2024-12-06 18:42:18.816441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.142 [2024-12-06 18:42:18.827355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.142 [2024-12-06 18:42:18.827472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.142 [2024-12-06 18:42:18.827516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.142 [2024-12-06 18:42:18.827535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.142 [2024-12-06 18:42:18.827552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.142 [2024-12-06 18:42:18.827597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.142 qpair failed and we were unable to recover it.
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.142 18:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2325296
00:30:24.143 [2024-12-06 18:42:18.837194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.143 [2024-12-06 18:42:18.837287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.143 [2024-12-06 18:42:18.837312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.143 [2024-12-06 18:42:18.837325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.143 [2024-12-06 18:42:18.837337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.143 [2024-12-06 18:42:18.837362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.143 qpair failed and we were unable to recover it.
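At this point the listener is back (the *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** notice) and the failure mode changes: TCP connects now succeed, but the Fabrics CONNECT for the I/O queue is rejected on the target ("Unknown controller ID 0x1", surfaced to the host as "sct 1, sc 130"), which is the controller-state race this tc2 case probes. For orientation only, the host-side action being retried corresponds roughly to a kernel-initiator connect like the following (the test itself drives the SPDK userspace initiator, not nvme-cli):

    # Illustrative host-side equivalent of the retried fabric connect
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1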
00:30:24.143 [2024-12-06 18:42:18.847207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.143 [2024-12-06 18:42:18.847278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.143 [2024-12-06 18:42:18.847303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.143 [2024-12-06 18:42:18.847315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.143 [2024-12-06 18:42:18.847325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.143 [2024-12-06 18:42:18.847351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.143 qpair failed and we were unable to recover it. 00:30:24.143 [2024-12-06 18:42:18.857197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.143 [2024-12-06 18:42:18.857280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.143 [2024-12-06 18:42:18.857298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.143 [2024-12-06 18:42:18.857306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.143 [2024-12-06 18:42:18.857313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.143 [2024-12-06 18:42:18.857338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.143 qpair failed and we were unable to recover it. 00:30:24.143 [2024-12-06 18:42:18.867170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.143 [2024-12-06 18:42:18.867241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.143 [2024-12-06 18:42:18.867260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.143 [2024-12-06 18:42:18.867267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.143 [2024-12-06 18:42:18.867274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.143 [2024-12-06 18:42:18.867290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.143 qpair failed and we were unable to recover it. 
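Every failed I/O-qpair attempt in this run completes with the same status pair, sct 1, sc 130. As I read the NVMe-oF spec, SCT 0x1 is the Command Specific status type and, for the Fabrics CONNECT command, SC 0x82 (decimal 130) means Connect Invalid Parameters — consistent with the target-side "Unknown controller ID 0x1" rejection above. A small illustrative decoder (the mapping is my reading of the spec, not output from this test):

# Illustrative only: my reading of the NVMe-oF CONNECT status values.
decode_connect_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 1 ]; then
        echo "sct=$sct: not a command-specific status"
        return
    fi
    case "$sc" in
        128) echo "0x80 CONNECT: incompatible format" ;;
        129) echo "0x81 CONNECT: controller busy" ;;
        130) echo "0x82 CONNECT: invalid parameters" ;;
        131) echo "0x83 CONNECT: restart discovery" ;;
        132) echo "0x84 CONNECT: invalid host" ;;
        *)   echo "sc=$sc: unknown to this sketch" ;;
    esac
}

decode_connect_status 1 130   # -> 0x82 CONNECT: invalid parameters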
00:30:24.143 [2024-12-06 18:42:18.877184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.143 [2024-12-06 18:42:18.877259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.143 [2024-12-06 18:42:18.877278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.143 [2024-12-06 18:42:18.877285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.143 [2024-12-06 18:42:18.877292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.143 [2024-12-06 18:42:18.877309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.143 qpair failed and we were unable to recover it. 00:30:24.143 [2024-12-06 18:42:18.887165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.143 [2024-12-06 18:42:18.887232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.143 [2024-12-06 18:42:18.887251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.143 [2024-12-06 18:42:18.887258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.143 [2024-12-06 18:42:18.887265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.143 [2024-12-06 18:42:18.887282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.143 qpair failed and we were unable to recover it. 00:30:24.143 [2024-12-06 18:42:18.897229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.143 [2024-12-06 18:42:18.897312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.143 [2024-12-06 18:42:18.897362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.143 [2024-12-06 18:42:18.897370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.143 [2024-12-06 18:42:18.897376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.143 [2024-12-06 18:42:18.897406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.143 qpair failed and we were unable to recover it. 
00:30:24.406 [2024-12-06 18:42:18.907310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.406 [2024-12-06 18:42:18.907387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.406 [2024-12-06 18:42:18.907407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.406 [2024-12-06 18:42:18.907415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.406 [2024-12-06 18:42:18.907422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.406 [2024-12-06 18:42:18.907440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.406 qpair failed and we were unable to recover it. 00:30:24.406 [2024-12-06 18:42:18.917316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.406 [2024-12-06 18:42:18.917384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.406 [2024-12-06 18:42:18.917401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.406 [2024-12-06 18:42:18.917409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.406 [2024-12-06 18:42:18.917415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.406 [2024-12-06 18:42:18.917432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.406 qpair failed and we were unable to recover it. 00:30:24.406 [2024-12-06 18:42:18.927324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.406 [2024-12-06 18:42:18.927384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.406 [2024-12-06 18:42:18.927401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.406 [2024-12-06 18:42:18.927408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.406 [2024-12-06 18:42:18.927415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.406 [2024-12-06 18:42:18.927432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.406 qpair failed and we were unable to recover it. 
00:30:24.406 [2024-12-06 18:42:18.937328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.406 [2024-12-06 18:42:18.937398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.406 [2024-12-06 18:42:18.937415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.406 [2024-12-06 18:42:18.937422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.406 [2024-12-06 18:42:18.937428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.406 [2024-12-06 18:42:18.937445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.406 qpair failed and we were unable to recover it. 00:30:24.406 [2024-12-06 18:42:18.947380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.406 [2024-12-06 18:42:18.947460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.406 [2024-12-06 18:42:18.947504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.406 [2024-12-06 18:42:18.947514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.406 [2024-12-06 18:42:18.947522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.406 [2024-12-06 18:42:18.947547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.406 qpair failed and we were unable to recover it. 00:30:24.406 [2024-12-06 18:42:18.957408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.406 [2024-12-06 18:42:18.957478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.406 [2024-12-06 18:42:18.957499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.406 [2024-12-06 18:42:18.957507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.406 [2024-12-06 18:42:18.957513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.406 [2024-12-06 18:42:18.957532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.406 qpair failed and we were unable to recover it. 
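The trailing "CQ transport error -6 (No such device or address)" on each block reads as a negated errno: 6 is ENXIO on Linux, which matches the message text. A quick sanity check, assuming python3 is available:

python3 -c 'import errno, os; print(errno.errorcode[6], "=", os.strerror(6))'
# prints: ENXIO = No such device or address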
00:30:24.406 [2024-12-06 18:42:18.967391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.406 [2024-12-06 18:42:18.967460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.406 [2024-12-06 18:42:18.967478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.406 [2024-12-06 18:42:18.967486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.406 [2024-12-06 18:42:18.967492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.406 [2024-12-06 18:42:18.967510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.406 qpair failed and we were unable to recover it. 00:30:24.406 [2024-12-06 18:42:18.977482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.406 [2024-12-06 18:42:18.977549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.406 [2024-12-06 18:42:18.977566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.406 [2024-12-06 18:42:18.977574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.406 [2024-12-06 18:42:18.977581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:18.977597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:18.987540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:18.987611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:18.987630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:18.987644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:18.987662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:18.987680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 
00:30:24.407 [2024-12-06 18:42:18.997486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:18.997554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:18.997574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:18.997581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:18.997587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:18.997604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:19.007549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.007617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.007635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.007649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.007656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.007674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:19.017583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.017655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.017673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.017681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.017687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.017704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 
00:30:24.407 [2024-12-06 18:42:19.027659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.027733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.027751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.027759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.027765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.027782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:19.037657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.037730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.037747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.037755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.037761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.037778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:19.047744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.047817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.047834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.047841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.047848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.047864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 
00:30:24.407 [2024-12-06 18:42:19.057789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.057853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.057870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.057878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.057884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.057901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:19.067835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.067917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.067935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.067942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.067949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.067965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:19.077794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.077906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.077931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.077939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.077947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.077965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 
00:30:24.407 [2024-12-06 18:42:19.087844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.087916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.087934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.087941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.087948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.087964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:19.097850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.097951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.097969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.097976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.097983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.407 [2024-12-06 18:42:19.097999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.407 qpair failed and we were unable to recover it. 00:30:24.407 [2024-12-06 18:42:19.107832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.407 [2024-12-06 18:42:19.107941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.407 [2024-12-06 18:42:19.107960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.407 [2024-12-06 18:42:19.107968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.407 [2024-12-06 18:42:19.107974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.408 [2024-12-06 18:42:19.107990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.408 qpair failed and we were unable to recover it. 
00:30:24.408 [2024-12-06 18:42:19.117903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.408 [2024-12-06 18:42:19.117960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.408 [2024-12-06 18:42:19.117978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.408 [2024-12-06 18:42:19.117985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.408 [2024-12-06 18:42:19.117997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.408 [2024-12-06 18:42:19.118014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.408 qpair failed and we were unable to recover it. 00:30:24.408 [2024-12-06 18:42:19.127976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.408 [2024-12-06 18:42:19.128088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.408 [2024-12-06 18:42:19.128105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.408 [2024-12-06 18:42:19.128112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.408 [2024-12-06 18:42:19.128120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.408 [2024-12-06 18:42:19.128136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.408 qpair failed and we were unable to recover it. 00:30:24.408 [2024-12-06 18:42:19.137971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.408 [2024-12-06 18:42:19.138033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.408 [2024-12-06 18:42:19.138050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.408 [2024-12-06 18:42:19.138057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.408 [2024-12-06 18:42:19.138063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.408 [2024-12-06 18:42:19.138079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.408 qpair failed and we were unable to recover it. 
00:30:24.408 [2024-12-06 18:42:19.148004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.408 [2024-12-06 18:42:19.148085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.408 [2024-12-06 18:42:19.148102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.408 [2024-12-06 18:42:19.148110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.408 [2024-12-06 18:42:19.148116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.408 [2024-12-06 18:42:19.148132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.408 qpair failed and we were unable to recover it. 00:30:24.408 [2024-12-06 18:42:19.157989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.408 [2024-12-06 18:42:19.158057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.408 [2024-12-06 18:42:19.158074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.408 [2024-12-06 18:42:19.158081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.408 [2024-12-06 18:42:19.158087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.408 [2024-12-06 18:42:19.158103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.408 qpair failed and we were unable to recover it. 00:30:24.408 [2024-12-06 18:42:19.168055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.408 [2024-12-06 18:42:19.168123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.408 [2024-12-06 18:42:19.168140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.408 [2024-12-06 18:42:19.168147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.408 [2024-12-06 18:42:19.168154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.408 [2024-12-06 18:42:19.168170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.408 qpair failed and we were unable to recover it. 
00:30:24.408 [2024-12-06 18:42:19.178184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.408 [2024-12-06 18:42:19.178296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.408 [2024-12-06 18:42:19.178313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.408 [2024-12-06 18:42:19.178321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.408 [2024-12-06 18:42:19.178328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.408 [2024-12-06 18:42:19.178344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.408 qpair failed and we were unable to recover it. 00:30:24.671 [2024-12-06 18:42:19.188056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.671 [2024-12-06 18:42:19.188126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.671 [2024-12-06 18:42:19.188145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.671 [2024-12-06 18:42:19.188153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.671 [2024-12-06 18:42:19.188160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.671 [2024-12-06 18:42:19.188176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.671 qpair failed and we were unable to recover it. 00:30:24.671 [2024-12-06 18:42:19.198131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.671 [2024-12-06 18:42:19.198193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.671 [2024-12-06 18:42:19.198213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.671 [2024-12-06 18:42:19.198221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.671 [2024-12-06 18:42:19.198227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.671 [2024-12-06 18:42:19.198243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.671 qpair failed and we were unable to recover it. 
00:30:24.671 [2024-12-06 18:42:19.208169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.671 [2024-12-06 18:42:19.208233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.671 [2024-12-06 18:42:19.208258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.671 [2024-12-06 18:42:19.208265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.671 [2024-12-06 18:42:19.208271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.671 [2024-12-06 18:42:19.208288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.671 qpair failed and we were unable to recover it. 00:30:24.671 [2024-12-06 18:42:19.218209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.671 [2024-12-06 18:42:19.218324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.671 [2024-12-06 18:42:19.218343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.671 [2024-12-06 18:42:19.218350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.671 [2024-12-06 18:42:19.218357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.671 [2024-12-06 18:42:19.218373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.671 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.228269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.228351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.228369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.228376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.228382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.228398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 
00:30:24.672 [2024-12-06 18:42:19.238270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.238348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.238384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.238393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.238401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.238426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.248326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.248403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.248438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.248456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.248463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.248488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.258341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.258416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.258436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.258444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.258451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.258469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 
00:30:24.672 [2024-12-06 18:42:19.268378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.268453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.268471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.268478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.268485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.268502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.278385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.278452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.278470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.278478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.278485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.278501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.288371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.288436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.288455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.288462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.288468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.288491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 
00:30:24.672 [2024-12-06 18:42:19.298346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.298415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.298437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.298445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.298452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.298470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.308503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.308579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.308597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.308605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.308611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.308628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.318532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.318592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.318610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.318618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.318624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.318647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 
00:30:24.672 [2024-12-06 18:42:19.328504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.328597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.328615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.328623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.328629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.328652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.338441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.338528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.338545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.338553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.338559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.338576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 00:30:24.672 [2024-12-06 18:42:19.348647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.672 [2024-12-06 18:42:19.348754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.672 [2024-12-06 18:42:19.348772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.672 [2024-12-06 18:42:19.348779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.672 [2024-12-06 18:42:19.348785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.672 [2024-12-06 18:42:19.348802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.672 qpair failed and we were unable to recover it. 
00:30:24.672 [2024-12-06 18:42:19.358620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.673 [2024-12-06 18:42:19.358688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.673 [2024-12-06 18:42:19.358708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.673 [2024-12-06 18:42:19.358715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.673 [2024-12-06 18:42:19.358722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.673 [2024-12-06 18:42:19.358739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-12-06 18:42:19.368616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.673 [2024-12-06 18:42:19.368686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.673 [2024-12-06 18:42:19.368704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.673 [2024-12-06 18:42:19.368712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.673 [2024-12-06 18:42:19.368718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.673 [2024-12-06 18:42:19.368735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-12-06 18:42:19.378735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.673 [2024-12-06 18:42:19.378841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.673 [2024-12-06 18:42:19.378859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.673 [2024-12-06 18:42:19.378872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.673 [2024-12-06 18:42:19.378879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.673 [2024-12-06 18:42:19.378895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.673 qpair failed and we were unable to recover it. 
00:30:24.673 [2024-12-06 18:42:19.388782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.673 [2024-12-06 18:42:19.388900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.673 [2024-12-06 18:42:19.388917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.673 [2024-12-06 18:42:19.388925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.673 [2024-12-06 18:42:19.388931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.673 [2024-12-06 18:42:19.388948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-12-06 18:42:19.398655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.673 [2024-12-06 18:42:19.398735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.673 [2024-12-06 18:42:19.398756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.673 [2024-12-06 18:42:19.398764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.673 [2024-12-06 18:42:19.398771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.673 [2024-12-06 18:42:19.398793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-12-06 18:42:19.408781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.673 [2024-12-06 18:42:19.408836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.673 [2024-12-06 18:42:19.408855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.673 [2024-12-06 18:42:19.408862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.673 [2024-12-06 18:42:19.408869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:24.673 [2024-12-06 18:42:19.408885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:24.673 qpair failed and we were unable to recover it. 
00:30:24.673 [2024-12-06 18:42:19.418831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.673 [2024-12-06 18:42:19.418901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.673 [2024-12-06 18:42:19.418918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.673 [2024-12-06 18:42:19.418926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.673 [2024-12-06 18:42:19.418932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.673 [2024-12-06 18:42:19.418955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.673 qpair failed and we were unable to recover it.
00:30:24.673 [2024-12-06 18:42:19.428875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.673 [2024-12-06 18:42:19.428951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.673 [2024-12-06 18:42:19.428968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.673 [2024-12-06 18:42:19.428976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.673 [2024-12-06 18:42:19.428982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.673 [2024-12-06 18:42:19.428999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.673 qpair failed and we were unable to recover it.
00:30:24.673 [2024-12-06 18:42:19.438887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.673 [2024-12-06 18:42:19.438948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.673 [2024-12-06 18:42:19.438965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.673 [2024-12-06 18:42:19.438973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.673 [2024-12-06 18:42:19.438981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.673 [2024-12-06 18:42:19.438998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.673 qpair failed and we were unable to recover it.
00:30:24.673 [2024-12-06 18:42:19.448927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.673 [2024-12-06 18:42:19.448995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.673 [2024-12-06 18:42:19.449012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.673 [2024-12-06 18:42:19.449021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.673 [2024-12-06 18:42:19.449028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.673 [2024-12-06 18:42:19.449045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.673 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.458837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.458944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.458961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.458969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.458975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.936 [2024-12-06 18:42:19.458992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.936 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.469009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.469086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.469106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.469115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.469121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.936 [2024-12-06 18:42:19.469138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.936 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.478994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.479056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.479073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.479081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.479087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.936 [2024-12-06 18:42:19.479104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.936 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.489032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.489098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.489115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.489123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.489129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.936 [2024-12-06 18:42:19.489145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.936 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.499082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.499160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.499178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.499186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.499193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.936 [2024-12-06 18:42:19.499210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.936 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.509111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.509179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.509202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.509209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.509216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.936 [2024-12-06 18:42:19.509232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.936 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.519121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.519230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.519247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.519255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.519261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.936 [2024-12-06 18:42:19.519278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.936 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.529142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.529209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.529227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.529236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.529243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.936 [2024-12-06 18:42:19.529259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.936 qpair failed and we were unable to recover it.
00:30:24.936 [2024-12-06 18:42:19.539200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.936 [2024-12-06 18:42:19.539270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.936 [2024-12-06 18:42:19.539288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.936 [2024-12-06 18:42:19.539295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.936 [2024-12-06 18:42:19.539301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.539317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.549168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.549246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.549266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.549274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.549286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.549309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.559265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.559369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.559389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.559396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.559403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.559420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.569267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.569333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.569351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.569358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.569365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.569381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.579337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.579426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.579443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.579450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.579457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.579473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.589384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.589459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.589477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.589484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.589490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.589506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.599365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.599460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.599478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.599485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.599492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.599508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.609338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.609412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.609429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.609436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.609442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.609459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.619511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.619579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.619597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.619604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.619611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.619627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.629529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.629612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.629629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.629642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.629649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.629665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.639532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.639598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.639621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.639628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.639634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.639658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.649541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.649604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.649622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.649630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.649636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.649659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.659579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.659653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.659671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.659678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.659685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.659702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.937 qpair failed and we were unable to recover it.
00:30:24.937 [2024-12-06 18:42:19.669697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.937 [2024-12-06 18:42:19.669785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.937 [2024-12-06 18:42:19.669803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.937 [2024-12-06 18:42:19.669811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.937 [2024-12-06 18:42:19.669817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.937 [2024-12-06 18:42:19.669833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.938 qpair failed and we were unable to recover it.
00:30:24.938 [2024-12-06 18:42:19.679634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.938 [2024-12-06 18:42:19.679710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.938 [2024-12-06 18:42:19.679729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.938 [2024-12-06 18:42:19.679736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.938 [2024-12-06 18:42:19.679752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.938 [2024-12-06 18:42:19.679769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.938 qpair failed and we were unable to recover it.
00:30:24.938 [2024-12-06 18:42:19.689688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.938 [2024-12-06 18:42:19.689770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.938 [2024-12-06 18:42:19.689788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.938 [2024-12-06 18:42:19.689795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.938 [2024-12-06 18:42:19.689801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.938 [2024-12-06 18:42:19.689818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.938 qpair failed and we were unable to recover it.
00:30:24.938 [2024-12-06 18:42:19.699737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.938 [2024-12-06 18:42:19.699829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.938 [2024-12-06 18:42:19.699846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.938 [2024-12-06 18:42:19.699854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.938 [2024-12-06 18:42:19.699860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.938 [2024-12-06 18:42:19.699876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.938 qpair failed and we were unable to recover it.
00:30:24.938 [2024-12-06 18:42:19.709670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.938 [2024-12-06 18:42:19.709771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.938 [2024-12-06 18:42:19.709788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.938 [2024-12-06 18:42:19.709795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.938 [2024-12-06 18:42:19.709802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:24.938 [2024-12-06 18:42:19.709817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:24.938 qpair failed and we were unable to recover it.
00:30:25.200 [2024-12-06 18:42:19.719746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.200 [2024-12-06 18:42:19.719809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.200 [2024-12-06 18:42:19.719827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.200 [2024-12-06 18:42:19.719835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.200 [2024-12-06 18:42:19.719841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.200 [2024-12-06 18:42:19.719857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.200 qpair failed and we were unable to recover it.
00:30:25.200 [2024-12-06 18:42:19.729812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.200 [2024-12-06 18:42:19.729873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.200 [2024-12-06 18:42:19.729889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.200 [2024-12-06 18:42:19.729897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.200 [2024-12-06 18:42:19.729903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.200 [2024-12-06 18:42:19.729919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.200 qpair failed and we were unable to recover it.
00:30:25.200 [2024-12-06 18:42:19.739828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.739894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.739911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.739918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.739925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.739940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.749913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.750015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.750031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.750038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.750045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.750061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.759899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.759971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.759989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.759996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.760002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.760018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.769857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.769949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.769970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.769977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.769984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.770002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.780000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.780070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.780089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.780096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.780103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.780120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.790021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.790098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.790116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.790123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.790129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.790146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.800042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.800113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.800132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.800139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.800146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.800162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.810068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.810130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.810148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.810160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.810167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.810184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.820089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.820155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.820172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.820180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.820186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.820202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.830145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.830213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.830230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.830237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.830244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.830260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.840145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.840209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.840226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.840233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.840239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.840256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.850289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.850382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.850399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.850407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.850413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.850436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.860236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.860311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.201 [2024-12-06 18:42:19.860347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.201 [2024-12-06 18:42:19.860357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.201 [2024-12-06 18:42:19.860364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.201 [2024-12-06 18:42:19.860388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.201 qpair failed and we were unable to recover it.
00:30:25.201 [2024-12-06 18:42:19.870273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.201 [2024-12-06 18:42:19.870350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.870386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.870396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.870404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.870428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.880256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.880324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.880345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.880353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.880359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.880378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.890299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.890367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.890386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.890393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.890400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.890417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.900330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.900454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.900474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.900482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.900489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.900506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.910399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.910480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.910497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.910505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.910511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.910529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.920363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.920425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.920443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.920451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.920458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.920474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.930364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.930429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.930447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.930454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.930460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.930476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.940445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.940547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.940564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.940579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.940585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.940603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.950427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.950510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.950528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.950536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.950542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.950559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.960529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.960591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.960608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.960616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.960622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.960644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.970553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.970626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.970649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.970657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.970663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.970680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.202 [2024-12-06 18:42:19.980548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.202 [2024-12-06 18:42:19.980613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.202 [2024-12-06 18:42:19.980630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.202 [2024-12-06 18:42:19.980643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.202 [2024-12-06 18:42:19.980650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:25.202 [2024-12-06 18:42:19.980672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:25.202 qpair failed and we were unable to recover it.
00:30:25.465 [2024-12-06 18:42:19.990643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.465 [2024-12-06 18:42:19.990767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.465 [2024-12-06 18:42:19.990785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.465 [2024-12-06 18:42:19.990792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.465 [2024-12-06 18:42:19.990799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.465 [2024-12-06 18:42:19.990816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.465 qpair failed and we were unable to recover it. 00:30:25.465 [2024-12-06 18:42:20.000510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.465 [2024-12-06 18:42:20.000570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.465 [2024-12-06 18:42:20.000589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.465 [2024-12-06 18:42:20.000596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.465 [2024-12-06 18:42:20.000603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.465 [2024-12-06 18:42:20.000619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.465 qpair failed and we were unable to recover it. 00:30:25.465 [2024-12-06 18:42:20.010664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.465 [2024-12-06 18:42:20.010749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.465 [2024-12-06 18:42:20.010813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.465 [2024-12-06 18:42:20.010836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.465 [2024-12-06 18:42:20.010859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.465 [2024-12-06 18:42:20.010901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.465 qpair failed and we were unable to recover it. 
00:30:25.465 [2024-12-06 18:42:20.020712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.465 [2024-12-06 18:42:20.020800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.465 [2024-12-06 18:42:20.020835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.465 [2024-12-06 18:42:20.020849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.465 [2024-12-06 18:42:20.020860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.465 [2024-12-06 18:42:20.020892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.465 qpair failed and we were unable to recover it. 00:30:25.465 [2024-12-06 18:42:20.030747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.465 [2024-12-06 18:42:20.030858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.465 [2024-12-06 18:42:20.030893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.465 [2024-12-06 18:42:20.030906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.465 [2024-12-06 18:42:20.030916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.465 [2024-12-06 18:42:20.030950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.465 qpair failed and we were unable to recover it. 00:30:25.465 [2024-12-06 18:42:20.040748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.465 [2024-12-06 18:42:20.040830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.040866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.040880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.040894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.040929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 
00:30:25.466 [2024-12-06 18:42:20.050753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.050828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.050854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.050862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.050870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.050892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 00:30:25.466 [2024-12-06 18:42:20.060831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.060900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.060923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.060931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.060938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.060958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 00:30:25.466 [2024-12-06 18:42:20.070974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.071057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.071086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.071094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.071101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.071121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 
00:30:25.466 [2024-12-06 18:42:20.080883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.080964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.080997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.081007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.081016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.081041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 00:30:25.466 [2024-12-06 18:42:20.090909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.091004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.091023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.091031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.091039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.091057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 00:30:25.466 [2024-12-06 18:42:20.100840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.100909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.100927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.100935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.100942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.100959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 
00:30:25.466 [2024-12-06 18:42:20.111007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.111090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.111109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.111117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.111131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.111149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 00:30:25.466 [2024-12-06 18:42:20.121010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.121103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.121121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.121129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.121136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.121154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 00:30:25.466 [2024-12-06 18:42:20.131031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.131104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.131121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.131129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.131136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.131153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 
00:30:25.466 [2024-12-06 18:42:20.141061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.141134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.141161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.141169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.141176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.141197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 00:30:25.466 [2024-12-06 18:42:20.151101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.151173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.151190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.151198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.151205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.151222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 00:30:25.466 [2024-12-06 18:42:20.161019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.161099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.161117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.466 [2024-12-06 18:42:20.161124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.466 [2024-12-06 18:42:20.161131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.466 [2024-12-06 18:42:20.161148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.466 qpair failed and we were unable to recover it. 
00:30:25.466 [2024-12-06 18:42:20.171154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.466 [2024-12-06 18:42:20.171216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.466 [2024-12-06 18:42:20.171233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.467 [2024-12-06 18:42:20.171241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.467 [2024-12-06 18:42:20.171248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.467 [2024-12-06 18:42:20.171264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.467 qpair failed and we were unable to recover it. 00:30:25.467 [2024-12-06 18:42:20.181204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.467 [2024-12-06 18:42:20.181275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.467 [2024-12-06 18:42:20.181292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.467 [2024-12-06 18:42:20.181301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.467 [2024-12-06 18:42:20.181308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.467 [2024-12-06 18:42:20.181324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.467 qpair failed and we were unable to recover it. 00:30:25.467 [2024-12-06 18:42:20.191245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.467 [2024-12-06 18:42:20.191314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.467 [2024-12-06 18:42:20.191331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.467 [2024-12-06 18:42:20.191339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.467 [2024-12-06 18:42:20.191346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.467 [2024-12-06 18:42:20.191362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.467 qpair failed and we were unable to recover it. 
00:30:25.467 [2024-12-06 18:42:20.201244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.467 [2024-12-06 18:42:20.201304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.467 [2024-12-06 18:42:20.201328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.467 [2024-12-06 18:42:20.201336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.467 [2024-12-06 18:42:20.201342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.467 [2024-12-06 18:42:20.201359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.467 qpair failed and we were unable to recover it. 00:30:25.467 [2024-12-06 18:42:20.211250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.467 [2024-12-06 18:42:20.211313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.467 [2024-12-06 18:42:20.211330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.467 [2024-12-06 18:42:20.211338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.467 [2024-12-06 18:42:20.211345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.467 [2024-12-06 18:42:20.211360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.467 qpair failed and we were unable to recover it. 00:30:25.467 [2024-12-06 18:42:20.221266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.467 [2024-12-06 18:42:20.221392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.467 [2024-12-06 18:42:20.221413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.467 [2024-12-06 18:42:20.221425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.467 [2024-12-06 18:42:20.221432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.467 [2024-12-06 18:42:20.221449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.467 qpair failed and we were unable to recover it. 
00:30:25.467 [2024-12-06 18:42:20.231366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.467 [2024-12-06 18:42:20.231443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.467 [2024-12-06 18:42:20.231463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.467 [2024-12-06 18:42:20.231471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.467 [2024-12-06 18:42:20.231478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.467 [2024-12-06 18:42:20.231495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.467 qpair failed and we were unable to recover it. 00:30:25.467 [2024-12-06 18:42:20.241352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.467 [2024-12-06 18:42:20.241419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.467 [2024-12-06 18:42:20.241437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.467 [2024-12-06 18:42:20.241445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.467 [2024-12-06 18:42:20.241457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.467 [2024-12-06 18:42:20.241474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.467 qpair failed and we were unable to recover it. 00:30:25.730 [2024-12-06 18:42:20.251369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.730 [2024-12-06 18:42:20.251434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.730 [2024-12-06 18:42:20.251453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.730 [2024-12-06 18:42:20.251460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.730 [2024-12-06 18:42:20.251467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.730 [2024-12-06 18:42:20.251484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.730 qpair failed and we were unable to recover it. 
00:30:25.730 [2024-12-06 18:42:20.261402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.730 [2024-12-06 18:42:20.261470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.730 [2024-12-06 18:42:20.261488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.730 [2024-12-06 18:42:20.261496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.730 [2024-12-06 18:42:20.261502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.730 [2024-12-06 18:42:20.261519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-12-06 18:42:20.271463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.271537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.271555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.271563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.271570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.271587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-12-06 18:42:20.281462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.281523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.281542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.281550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.281557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.281573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 
00:30:25.731 [2024-12-06 18:42:20.291465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.291523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.291541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.291548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.291555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.291571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-12-06 18:42:20.301529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.301597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.301615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.301623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.301630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.301651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-12-06 18:42:20.311477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.311552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.311568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.311576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.311582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.311598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 
00:30:25.731 [2024-12-06 18:42:20.321588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.321652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.321670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.321677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.321684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.321700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-12-06 18:42:20.331634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.331706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.331724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.331732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.331738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.331755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-12-06 18:42:20.341678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.341743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.341761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.341769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.341776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.341792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 
00:30:25.731 [2024-12-06 18:42:20.351616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.351698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.351715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.351723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.351729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.351746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-12-06 18:42:20.361723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.361792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.361810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.361817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.361824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.361840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-12-06 18:42:20.371752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.371832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.371850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.371863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.371870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.371887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 
00:30:25.731 [2024-12-06 18:42:20.381810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.381909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.381926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.381934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.381941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.381957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-12-06 18:42:20.391837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.731 [2024-12-06 18:42:20.391915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.731 [2024-12-06 18:42:20.391933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.731 [2024-12-06 18:42:20.391941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.731 [2024-12-06 18:42:20.391948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.731 [2024-12-06 18:42:20.391965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.732 [2024-12-06 18:42:20.401839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.401933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.401954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.401965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.401972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.401991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 
00:30:25.732 [2024-12-06 18:42:20.411848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.411912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.411931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.411938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.411945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.411968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-12-06 18:42:20.421927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.421995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.422013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.422020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.422026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.422043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-12-06 18:42:20.431949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.432023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.432040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.432048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.432054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.432070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 
00:30:25.732 [2024-12-06 18:42:20.441956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.442027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.442044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.442051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.442058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.442073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-12-06 18:42:20.451999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.452063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.452080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.452087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.452094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.452111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-12-06 18:42:20.462049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.462125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.462142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.462150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.462157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.462173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 
00:30:25.732 [2024-12-06 18:42:20.472107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.472175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.472198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.472205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.472212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.472229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-12-06 18:42:20.482088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.482149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.482167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.482175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.482181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.482197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-12-06 18:42:20.492116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.492190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.732 [2024-12-06 18:42:20.492207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.732 [2024-12-06 18:42:20.492215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.732 [2024-12-06 18:42:20.492221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.732 [2024-12-06 18:42:20.492237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.732 qpair failed and we were unable to recover it. 
00:30:25.732 [2024-12-06 18:42:20.502162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.732 [2024-12-06 18:42:20.502228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.733 [2024-12-06 18:42:20.502246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.733 [2024-12-06 18:42:20.502265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.733 [2024-12-06 18:42:20.502272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.733 [2024-12-06 18:42:20.502288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.997 [2024-12-06 18:42:20.512241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.997 [2024-12-06 18:42:20.512320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.997 [2024-12-06 18:42:20.512338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.997 [2024-12-06 18:42:20.512347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.997 [2024-12-06 18:42:20.512354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.997 [2024-12-06 18:42:20.512372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.997 qpair failed and we were unable to recover it. 00:30:25.997 [2024-12-06 18:42:20.522213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.997 [2024-12-06 18:42:20.522270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.997 [2024-12-06 18:42:20.522287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.997 [2024-12-06 18:42:20.522295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.997 [2024-12-06 18:42:20.522302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:25.997 [2024-12-06 18:42:20.522318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:25.997 qpair failed and we were unable to recover it. 
[identical seven-line CONNECT failure block repeated 66 more times, 18:42:20.532105 through 18:42:21.184217, roughly one attempt every 10 ms; only the timestamps differ, and every attempt ends "qpair failed and we were unable to recover it."]
00:30:26.530 [2024-12-06 18:42:21.194284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.194367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.194392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.194400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.194406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.194423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 00:30:26.530 [2024-12-06 18:42:21.204179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.204243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.204264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.204272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.204278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.204307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 00:30:26.530 [2024-12-06 18:42:21.214322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.214392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.214410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.214417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.214424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.214440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 
00:30:26.530 [2024-12-06 18:42:21.224337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.224415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.224433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.224440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.224446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.224463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 00:30:26.530 [2024-12-06 18:42:21.234415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.234506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.234543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.234552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.234572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.234597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 00:30:26.530 [2024-12-06 18:42:21.244392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.244503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.244524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.244531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.244538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.244557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 
00:30:26.530 [2024-12-06 18:42:21.254417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.254506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.254525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.254533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.254539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.254557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 00:30:26.530 [2024-12-06 18:42:21.264451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.264522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.264539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.264547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.264553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.264571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 00:30:26.530 [2024-12-06 18:42:21.274556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.530 [2024-12-06 18:42:21.274635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.530 [2024-12-06 18:42:21.274658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.530 [2024-12-06 18:42:21.274666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.530 [2024-12-06 18:42:21.274672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.530 [2024-12-06 18:42:21.274689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.530 qpair failed and we were unable to recover it. 
00:30:26.531 [2024-12-06 18:42:21.284519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.531 [2024-12-06 18:42:21.284583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.531 [2024-12-06 18:42:21.284601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.531 [2024-12-06 18:42:21.284609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.531 [2024-12-06 18:42:21.284616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.531 [2024-12-06 18:42:21.284632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.531 qpair failed and we were unable to recover it. 00:30:26.531 [2024-12-06 18:42:21.294535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.531 [2024-12-06 18:42:21.294596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.531 [2024-12-06 18:42:21.294618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.531 [2024-12-06 18:42:21.294628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.531 [2024-12-06 18:42:21.294636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.531 [2024-12-06 18:42:21.294679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.531 qpair failed and we were unable to recover it. 00:30:26.531 [2024-12-06 18:42:21.304596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.531 [2024-12-06 18:42:21.304675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.531 [2024-12-06 18:42:21.304695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.531 [2024-12-06 18:42:21.304703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.531 [2024-12-06 18:42:21.304710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.531 [2024-12-06 18:42:21.304729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.531 qpair failed and we were unable to recover it. 
00:30:26.794 [2024-12-06 18:42:21.314622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.794 [2024-12-06 18:42:21.314731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.794 [2024-12-06 18:42:21.314751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.794 [2024-12-06 18:42:21.314759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.794 [2024-12-06 18:42:21.314766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.794 [2024-12-06 18:42:21.314783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.794 qpair failed and we were unable to recover it. 00:30:26.794 [2024-12-06 18:42:21.324647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.794 [2024-12-06 18:42:21.324752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.794 [2024-12-06 18:42:21.324776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.794 [2024-12-06 18:42:21.324783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.794 [2024-12-06 18:42:21.324790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.794 [2024-12-06 18:42:21.324807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.794 qpair failed and we were unable to recover it. 00:30:26.794 [2024-12-06 18:42:21.334666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.794 [2024-12-06 18:42:21.334744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.794 [2024-12-06 18:42:21.334764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.794 [2024-12-06 18:42:21.334777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.794 [2024-12-06 18:42:21.334784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.794 [2024-12-06 18:42:21.334801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 
00:30:26.795 [2024-12-06 18:42:21.344712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.344812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.344832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.344839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.344846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.344863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.795 [2024-12-06 18:42:21.354771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.354846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.354863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.354871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.354877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.354894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.795 [2024-12-06 18:42:21.364793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.364882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.364900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.364913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.364920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.364937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 
00:30:26.795 [2024-12-06 18:42:21.374786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.374851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.374871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.374878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.374885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.374902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.795 [2024-12-06 18:42:21.384839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.384921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.384938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.384946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.384952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.384969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.795 [2024-12-06 18:42:21.394924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.394996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.395015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.395022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.395029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.395046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 
00:30:26.795 [2024-12-06 18:42:21.404912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.404995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.405013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.405020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.405027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.405043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.795 [2024-12-06 18:42:21.414943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.415008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.415026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.415033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.415040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.415056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.795 [2024-12-06 18:42:21.424973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.425040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.425057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.425065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.425072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.425088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 
00:30:26.795 [2024-12-06 18:42:21.435076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.435141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.435158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.435166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.435173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.435189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.795 [2024-12-06 18:42:21.445018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.445088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.445105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.445113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.445120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.445135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.795 [2024-12-06 18:42:21.455077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.455143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.455160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.455167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.455174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.455190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 
00:30:26.795 [2024-12-06 18:42:21.465097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.795 [2024-12-06 18:42:21.465173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.795 [2024-12-06 18:42:21.465193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.795 [2024-12-06 18:42:21.465200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.795 [2024-12-06 18:42:21.465207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.795 [2024-12-06 18:42:21.465224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.795 qpair failed and we were unable to recover it. 00:30:26.796 [2024-12-06 18:42:21.475153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.475226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.475241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.475249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.475255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.475270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 00:30:26.796 [2024-12-06 18:42:21.485102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.485154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.485171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.485178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.485184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.485201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 
00:30:26.796 [2024-12-06 18:42:21.495174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.495233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.495250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.495262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.495268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.495284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 00:30:26.796 [2024-12-06 18:42:21.505189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.505254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.505270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.505278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.505284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.505299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 00:30:26.796 [2024-12-06 18:42:21.515237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.515304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.515319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.515326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.515332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.515347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 
00:30:26.796 [2024-12-06 18:42:21.525192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.525250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.525264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.525272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.525278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.525293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 00:30:26.796 [2024-12-06 18:42:21.535249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.535300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.535315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.535322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.535329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.535348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 00:30:26.796 [2024-12-06 18:42:21.545265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.545332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.545347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.545354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.545360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.545375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 
00:30:26.796 [2024-12-06 18:42:21.555353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.555416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.555445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.555454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.555461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.555482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 00:30:26.796 [2024-12-06 18:42:21.565272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.565334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.565362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.565371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.565378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.565399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 00:30:26.796 [2024-12-06 18:42:21.575369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.796 [2024-12-06 18:42:21.575433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.796 [2024-12-06 18:42:21.575460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.796 [2024-12-06 18:42:21.575469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.796 [2024-12-06 18:42:21.575476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:26.796 [2024-12-06 18:42:21.575497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.796 qpair failed and we were unable to recover it. 
00:30:27.060 [2024-12-06 18:42:21.585393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.585479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.585496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.585504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.585510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.585526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 00:30:27.060 [2024-12-06 18:42:21.595435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.595500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.595515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.595522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.595528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.595543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 00:30:27.060 [2024-12-06 18:42:21.605406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.605458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.605472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.605479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.605486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.605500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 
00:30:27.060 [2024-12-06 18:42:21.615503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.615573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.615588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.615595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.615601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.615616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 00:30:27.060 [2024-12-06 18:42:21.625504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.625555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.625574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.625581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.625588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.625603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 00:30:27.060 [2024-12-06 18:42:21.635553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.635606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.635621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.635628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.635634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.635654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 
00:30:27.060 [2024-12-06 18:42:21.645496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.645550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.645563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.645571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.645577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.645591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 00:30:27.060 [2024-12-06 18:42:21.655550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.655598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.655611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.655618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.655625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.655645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 00:30:27.060 [2024-12-06 18:42:21.665571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.665641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.665655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.060 [2024-12-06 18:42:21.665662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.060 [2024-12-06 18:42:21.665669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.060 [2024-12-06 18:42:21.665687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.060 qpair failed and we were unable to recover it. 
00:30:27.060 [2024-12-06 18:42:21.675640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.060 [2024-12-06 18:42:21.675721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.060 [2024-12-06 18:42:21.675735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.061 [2024-12-06 18:42:21.675742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.061 [2024-12-06 18:42:21.675748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.061 [2024-12-06 18:42:21.675762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.061 qpair failed and we were unable to recover it. 00:30:27.061 [2024-12-06 18:42:21.685590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.061 [2024-12-06 18:42:21.685645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.061 [2024-12-06 18:42:21.685658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.061 [2024-12-06 18:42:21.685665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.061 [2024-12-06 18:42:21.685672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.061 [2024-12-06 18:42:21.685686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.061 qpair failed and we were unable to recover it. 00:30:27.061 [2024-12-06 18:42:21.695614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.061 [2024-12-06 18:42:21.695709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.061 [2024-12-06 18:42:21.695723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.061 [2024-12-06 18:42:21.695730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.061 [2024-12-06 18:42:21.695736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.061 [2024-12-06 18:42:21.695750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.061 qpair failed and we were unable to recover it. 
00:30:27.061 [2024-12-06 18:42:21.705718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.705776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.705789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.705796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.705802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.705816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.715703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.715762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.715775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.715782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.715789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.715803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.725715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.725761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.725774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.725781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.725788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.725801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.735819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.735903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.735917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.735924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.735930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.735944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.745819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.745874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.745887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.745894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.745900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.745914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.755860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.755923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.755940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.755947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.755953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.755967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.765735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.765790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.765803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.765810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.765816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.765830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.775903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.775986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.775999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.776006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.776012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.776026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.785919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.785969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.785982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.785989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.785996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.786009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.795964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.796021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.061 [2024-12-06 18:42:21.796034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.061 [2024-12-06 18:42:21.796041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.061 [2024-12-06 18:42:21.796051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.061 [2024-12-06 18:42:21.796065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.061 qpair failed and we were unable to recover it.
00:30:27.061 [2024-12-06 18:42:21.805921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.061 [2024-12-06 18:42:21.805964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.062 [2024-12-06 18:42:21.805978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.062 [2024-12-06 18:42:21.805984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.062 [2024-12-06 18:42:21.805991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.062 [2024-12-06 18:42:21.806005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.062 qpair failed and we were unable to recover it.
00:30:27.062 [2024-12-06 18:42:21.816018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.062 [2024-12-06 18:42:21.816068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.062 [2024-12-06 18:42:21.816081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.062 [2024-12-06 18:42:21.816088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.062 [2024-12-06 18:42:21.816094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.062 [2024-12-06 18:42:21.816108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.062 qpair failed and we were unable to recover it.
00:30:27.062 [2024-12-06 18:42:21.826054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.062 [2024-12-06 18:42:21.826105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.062 [2024-12-06 18:42:21.826118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.062 [2024-12-06 18:42:21.826125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.062 [2024-12-06 18:42:21.826131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.062 [2024-12-06 18:42:21.826145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.062 qpair failed and we were unable to recover it.
00:30:27.062 [2024-12-06 18:42:21.836060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.062 [2024-12-06 18:42:21.836109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.062 [2024-12-06 18:42:21.836123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.062 [2024-12-06 18:42:21.836129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.062 [2024-12-06 18:42:21.836136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.062 [2024-12-06 18:42:21.836149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.062 qpair failed and we were unable to recover it.
00:30:27.323 [2024-12-06 18:42:21.846051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.323 [2024-12-06 18:42:21.846105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.323 [2024-12-06 18:42:21.846118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.323 [2024-12-06 18:42:21.846126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.323 [2024-12-06 18:42:21.846132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.323 [2024-12-06 18:42:21.846146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.323 qpair failed and we were unable to recover it.
00:30:27.323 [2024-12-06 18:42:21.856117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.323 [2024-12-06 18:42:21.856165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.323 [2024-12-06 18:42:21.856178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.856185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.856192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.856205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.866138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.866203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.866216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.866223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.866229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.866243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.876191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.876246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.876259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.876266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.876273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.876286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.886163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.886207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.886224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.886231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.886237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.886251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.896227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.896273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.896287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.896293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.896300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.896313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.906301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.906366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.906379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.906386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.906392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.906406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.916261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.916316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.916329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.916336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.916343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.916356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.926269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.926314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.926327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.926337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.926344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.926357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.936335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.936393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.936407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.936414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.936420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.936434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.946385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.946438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.946452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.946459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.946465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.946478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.956421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.956481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.956506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.956515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.956522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.956542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.966356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.966410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.966435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.966444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.966451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.966470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.976452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.324 [2024-12-06 18:42:21.976510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.324 [2024-12-06 18:42:21.976535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.324 [2024-12-06 18:42:21.976544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.324 [2024-12-06 18:42:21.976551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.324 [2024-12-06 18:42:21.976570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.324 qpair failed and we were unable to recover it.
00:30:27.324 [2024-12-06 18:42:21.986445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:21.986521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:21.986537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:21.986544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:21.986551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:21.986566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:21.996519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:21.996604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:21.996618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:21.996625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:21.996631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:21.996650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.006489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.006536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.006549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.006556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.006562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.006576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.016549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.016601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.016615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.016622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.016628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.016645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.026603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.026663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.026677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.026692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.026699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.026713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.036600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.036663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.036676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.036683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.036690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.036704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.046593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.046646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.046660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.046667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.046673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.046688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.056663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.056715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.056728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.056739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.056746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.056760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.066707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.066761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.066774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.066781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.066788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.066801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.076745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.076831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.076844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.076851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.076857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.076872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.086670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.086725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.086739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.086746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.086752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.086767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.325 [2024-12-06 18:42:22.096763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.325 [2024-12-06 18:42:22.096825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.325 [2024-12-06 18:42:22.096839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.325 [2024-12-06 18:42:22.096846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.325 [2024-12-06 18:42:22.096852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.325 [2024-12-06 18:42:22.096870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.325 qpair failed and we were unable to recover it.
00:30:27.588 [2024-12-06 18:42:22.106804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.588 [2024-12-06 18:42:22.106860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.588 [2024-12-06 18:42:22.106874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.588 [2024-12-06 18:42:22.106881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.588 [2024-12-06 18:42:22.106887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.588 [2024-12-06 18:42:22.106902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-12-06 18:42:22.116842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.588 [2024-12-06 18:42:22.116933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.588 [2024-12-06 18:42:22.116946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.588 [2024-12-06 18:42:22.116953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.588 [2024-12-06 18:42:22.116959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.588 [2024-12-06 18:42:22.116973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-12-06 18:42:22.126800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.588 [2024-12-06 18:42:22.126843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.588 [2024-12-06 18:42:22.126856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.588 [2024-12-06 18:42:22.126863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.588 [2024-12-06 18:42:22.126870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.588 [2024-12-06 18:42:22.126883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-12-06 18:42:22.136876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.588 [2024-12-06 18:42:22.136923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.588 [2024-12-06 18:42:22.136936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.588 [2024-12-06 18:42:22.136943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.588 [2024-12-06 18:42:22.136949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.588 [2024-12-06 18:42:22.136963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-12-06 18:42:22.146902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.588 [2024-12-06 18:42:22.146993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.588 [2024-12-06 18:42:22.147007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.588 [2024-12-06 18:42:22.147014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.588 [2024-12-06 18:42:22.147020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.588 [2024-12-06 18:42:22.147034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.588 qpair failed and we were unable to recover it.
00:30:27.588 [2024-12-06 18:42:22.156930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.588 [2024-12-06 18:42:22.156987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.588 [2024-12-06 18:42:22.157000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.588 [2024-12-06 18:42:22.157007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.588 [2024-12-06 18:42:22.157013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.588 [2024-12-06 18:42:22.157027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.166941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.166987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.167001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.167008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.167015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.167029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.177024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.177118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.177132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.177139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.177145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.177159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.187027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.187085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.187102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.187109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.187115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.187129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.197066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.197120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.197134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.197141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.197148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.197161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.207038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.207091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.207104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.207111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.207117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.207131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.217061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.217116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.217129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.217135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.217141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.217155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.227117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.227169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.227182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.227189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.227199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.227213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.237170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.237223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.237236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.237243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.237250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.237263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.247144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.247190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.247203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.247210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.247216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.247230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.257185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.257237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.257250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.257256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.257263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.257276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.267121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.589 [2024-12-06 18:42:22.267175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.589 [2024-12-06 18:42:22.267188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.589 [2024-12-06 18:42:22.267195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.589 [2024-12-06 18:42:22.267201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:27.589 [2024-12-06 18:42:22.267215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.589 qpair failed and we were unable to recover it.
00:30:27.589 [2024-12-06 18:42:22.277162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.589 [2024-12-06 18:42:22.277231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.589 [2024-12-06 18:42:22.277244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.589 [2024-12-06 18:42:22.277251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.589 [2024-12-06 18:42:22.277258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.589 [2024-12-06 18:42:22.277271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.589 [2024-12-06 18:42:22.287240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.589 [2024-12-06 18:42:22.287290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.589 [2024-12-06 18:42:22.287304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.589 [2024-12-06 18:42:22.287311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.589 [2024-12-06 18:42:22.287317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.589 [2024-12-06 18:42:22.287331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.589 qpair failed and we were unable to recover it. 00:30:27.590 [2024-12-06 18:42:22.297308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.590 [2024-12-06 18:42:22.297363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.590 [2024-12-06 18:42:22.297377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.590 [2024-12-06 18:42:22.297384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.590 [2024-12-06 18:42:22.297390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.590 [2024-12-06 18:42:22.297404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.590 qpair failed and we were unable to recover it. 
00:30:27.590 [2024-12-06 18:42:22.307233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.590 [2024-12-06 18:42:22.307291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.590 [2024-12-06 18:42:22.307304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.590 [2024-12-06 18:42:22.307311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.590 [2024-12-06 18:42:22.307317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.590 [2024-12-06 18:42:22.307331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-12-06 18:42:22.317386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.590 [2024-12-06 18:42:22.317441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.590 [2024-12-06 18:42:22.317458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.590 [2024-12-06 18:42:22.317465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.590 [2024-12-06 18:42:22.317471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.590 [2024-12-06 18:42:22.317485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-12-06 18:42:22.327351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.590 [2024-12-06 18:42:22.327409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.590 [2024-12-06 18:42:22.327434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.590 [2024-12-06 18:42:22.327443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.590 [2024-12-06 18:42:22.327450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.590 [2024-12-06 18:42:22.327469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.590 qpair failed and we were unable to recover it. 
00:30:27.590 [2024-12-06 18:42:22.337431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.590 [2024-12-06 18:42:22.337488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.590 [2024-12-06 18:42:22.337513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.590 [2024-12-06 18:42:22.337521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.590 [2024-12-06 18:42:22.337529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.590 [2024-12-06 18:42:22.337548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-12-06 18:42:22.347485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.590 [2024-12-06 18:42:22.347565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.590 [2024-12-06 18:42:22.347580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.590 [2024-12-06 18:42:22.347587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.590 [2024-12-06 18:42:22.347594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.590 [2024-12-06 18:42:22.347609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.590 [2024-12-06 18:42:22.357514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.590 [2024-12-06 18:42:22.357600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.590 [2024-12-06 18:42:22.357614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.590 [2024-12-06 18:42:22.357621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.590 [2024-12-06 18:42:22.357631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.590 [2024-12-06 18:42:22.357650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.590 qpair failed and we were unable to recover it. 
00:30:27.590 [2024-12-06 18:42:22.367361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.590 [2024-12-06 18:42:22.367408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.590 [2024-12-06 18:42:22.367421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.590 [2024-12-06 18:42:22.367428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.590 [2024-12-06 18:42:22.367434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.590 [2024-12-06 18:42:22.367449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.590 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-06 18:42:22.377562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.852 [2024-12-06 18:42:22.377624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.852 [2024-12-06 18:42:22.377641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.852 [2024-12-06 18:42:22.377649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.852 [2024-12-06 18:42:22.377655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.852 [2024-12-06 18:42:22.377669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.852 qpair failed and we were unable to recover it. 00:30:27.852 [2024-12-06 18:42:22.387583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.852 [2024-12-06 18:42:22.387642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.852 [2024-12-06 18:42:22.387655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.852 [2024-12-06 18:42:22.387662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.852 [2024-12-06 18:42:22.387669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.852 [2024-12-06 18:42:22.387683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.852 qpair failed and we were unable to recover it. 
00:30:27.852 [2024-12-06 18:42:22.397610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.852 [2024-12-06 18:42:22.397669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.852 [2024-12-06 18:42:22.397683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.397689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.397696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.397710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.407595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.407650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.407663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.407670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.407677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.407691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.417657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.417706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.417720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.417727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.417733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.417747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-06 18:42:22.427663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.427734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.427747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.427754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.427760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.427774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.437706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.437765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.437778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.437786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.437792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.437806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.447711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.447760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.447776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.447784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.447790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.447804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-06 18:42:22.457752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.457809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.457822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.457830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.457836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.457850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.467817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.467872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.467885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.467892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.467898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.467911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.477750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.477843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.477856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.477863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.477870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.477883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-06 18:42:22.487822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.487878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.487891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.487901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.487907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.487921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.497900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.497951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.497964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.497971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.497977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.497991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.507927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.507978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.507991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.507998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.508005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.508018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 
00:30:27.853 [2024-12-06 18:42:22.517971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.518029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.853 [2024-12-06 18:42:22.518042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.853 [2024-12-06 18:42:22.518049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.853 [2024-12-06 18:42:22.518056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.853 [2024-12-06 18:42:22.518069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.853 qpair failed and we were unable to recover it. 00:30:27.853 [2024-12-06 18:42:22.527823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.853 [2024-12-06 18:42:22.527912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.527925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.527932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.527938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.527952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-06 18:42:22.537971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.538029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.538042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.538049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.538055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.538069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 
00:30:27.854 [2024-12-06 18:42:22.547917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.547970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.547983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.547990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.547996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.548010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-06 18:42:22.558053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.558112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.558125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.558132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.558138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.558152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-06 18:42:22.568002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.568047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.568061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.568068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.568074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.568087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 
00:30:27.854 [2024-12-06 18:42:22.578083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.578158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.578172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.578179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.578185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.578199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-06 18:42:22.588127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.588182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.588196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.588203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.588209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.588222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-06 18:42:22.598153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.598222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.598235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.598242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.598248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.598262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 
00:30:27.854 [2024-12-06 18:42:22.608138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.608186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.608200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.608206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.608213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.608227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-06 18:42:22.618211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.618270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.618283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.618293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.618300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.618313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 00:30:27.854 [2024-12-06 18:42:22.628253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.854 [2024-12-06 18:42:22.628307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.854 [2024-12-06 18:42:22.628320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.854 [2024-12-06 18:42:22.628327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.854 [2024-12-06 18:42:22.628333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:27.854 [2024-12-06 18:42:22.628348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.854 qpair failed and we were unable to recover it. 
00:30:28.119 [2024-12-06 18:42:22.638243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.638297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.638310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.638317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.638324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.638337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 00:30:28.119 [2024-12-06 18:42:22.648249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.648298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.648312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.648319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.648325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.648339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 00:30:28.119 [2024-12-06 18:42:22.658315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.658373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.658398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.658406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.658414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.658439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 
00:30:28.119 [2024-12-06 18:42:22.668332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.668393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.668419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.668428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.668434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.668454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 00:30:28.119 [2024-12-06 18:42:22.678397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.678460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.678485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.678494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.678501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.678521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 00:30:28.119 [2024-12-06 18:42:22.688239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.688303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.688317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.688325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.688332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.688346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 
00:30:28.119 [2024-12-06 18:42:22.698389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.698459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.698473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.698480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.698486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.698501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 00:30:28.119 [2024-12-06 18:42:22.708458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.708510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.708524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.708531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.708537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.708551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 00:30:28.119 [2024-12-06 18:42:22.718508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.718577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.718591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.718598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.718605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.718619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 
00:30:28.119 [2024-12-06 18:42:22.728350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.728401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.728414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.728421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.728428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.728442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 00:30:28.119 [2024-12-06 18:42:22.738483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.738531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.738545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.119 [2024-12-06 18:42:22.738552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.119 [2024-12-06 18:42:22.738559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.119 [2024-12-06 18:42:22.738574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.119 qpair failed and we were unable to recover it. 00:30:28.119 [2024-12-06 18:42:22.748564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.119 [2024-12-06 18:42:22.748618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.119 [2024-12-06 18:42:22.748635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.748646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.748653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.748667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 
00:30:28.120 [2024-12-06 18:42:22.758595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.758653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.758667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.758674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.758680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.758694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 00:30:28.120 [2024-12-06 18:42:22.768586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.768681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.768695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.768702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.768709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.768723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 00:30:28.120 [2024-12-06 18:42:22.778597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.778670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.778684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.778691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.778697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.778711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 
00:30:28.120 [2024-12-06 18:42:22.788685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.788739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.788752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.788758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.788768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.788782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 00:30:28.120 [2024-12-06 18:42:22.798719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.798774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.798788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.798795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.798802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.798816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 00:30:28.120 [2024-12-06 18:42:22.808688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.808760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.808774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.808781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.808787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.808802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 
00:30:28.120 [2024-12-06 18:42:22.818761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.818850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.818863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.818870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.818877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.818891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 00:30:28.120 [2024-12-06 18:42:22.828714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.828784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.828799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.828806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.828812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.828832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 00:30:28.120 [2024-12-06 18:42:22.838839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.120 [2024-12-06 18:42:22.838893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.120 [2024-12-06 18:42:22.838907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.120 [2024-12-06 18:42:22.838914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.120 [2024-12-06 18:42:22.838920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90 00:30:28.120 [2024-12-06 18:42:22.838934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.120 qpair failed and we were unable to recover it. 
00:30:28.120 [2024-12-06 18:42:22.848808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.120 [2024-12-06 18:42:22.848857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.120 [2024-12-06 18:42:22.848871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.120 [2024-12-06 18:42:22.848878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.120 [2024-12-06 18:42:22.848884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.120 [2024-12-06 18:42:22.848898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.120 qpair failed and we were unable to recover it.
00:30:28.120 [2024-12-06 18:42:22.858838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.120 [2024-12-06 18:42:22.858891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.120 [2024-12-06 18:42:22.858905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.120 [2024-12-06 18:42:22.858912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.120 [2024-12-06 18:42:22.858918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.120 [2024-12-06 18:42:22.858932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.120 qpair failed and we were unable to recover it.
00:30:28.120 [2024-12-06 18:42:22.868807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.120 [2024-12-06 18:42:22.868862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.120 [2024-12-06 18:42:22.868876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.120 [2024-12-06 18:42:22.868883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.120 [2024-12-06 18:42:22.868889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.120 [2024-12-06 18:42:22.868903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.120 qpair failed and we were unable to recover it.
00:30:28.120 [2024-12-06 18:42:22.878954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.120 [2024-12-06 18:42:22.879051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.120 [2024-12-06 18:42:22.879067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.121 [2024-12-06 18:42:22.879074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.121 [2024-12-06 18:42:22.879081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.121 [2024-12-06 18:42:22.879095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.121 qpair failed and we were unable to recover it.
00:30:28.121 [2024-12-06 18:42:22.888938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.121 [2024-12-06 18:42:22.889030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.121 [2024-12-06 18:42:22.889043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.121 [2024-12-06 18:42:22.889050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.121 [2024-12-06 18:42:22.889056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.121 [2024-12-06 18:42:22.889071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.121 qpair failed and we were unable to recover it.
00:30:28.121 [2024-12-06 18:42:22.898916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.121 [2024-12-06 18:42:22.898980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.121 [2024-12-06 18:42:22.898995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.121 [2024-12-06 18:42:22.899002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.121 [2024-12-06 18:42:22.899009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.121 [2024-12-06 18:42:22.899024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.121 qpair failed and we were unable to recover it.
00:30:28.384 [2024-12-06 18:42:22.909002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.909060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.909074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.909081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.909087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.909101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.919074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.919156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.919170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.919177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.919187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.919202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.929024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.929074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.929087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.929094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.929101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.929115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.939065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.939181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.939195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.939202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.939208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.939222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.949121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.949219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.949232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.949239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.949245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.949260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.959172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.959223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.959237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.959244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.959251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.959264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.969139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.969187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.969200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.969207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.969214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.969228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.979046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.979093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.979106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.979113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.979120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.979133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.989232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.989285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.989298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.989305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.989311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.989325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:22.999253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:22.999310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:22.999324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:22.999331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:22.999337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:22.999351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:23.009258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:23.009308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:23.009325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:23.009332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:23.009339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:23.009353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:23.019289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:23.019340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:23.019353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:23.019360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:23.019367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.385 [2024-12-06 18:42:23.019381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.385 qpair failed and we were unable to recover it.
00:30:28.385 [2024-12-06 18:42:23.029350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.385 [2024-12-06 18:42:23.029441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.385 [2024-12-06 18:42:23.029456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.385 [2024-12-06 18:42:23.029462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.385 [2024-12-06 18:42:23.029469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.029483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.039358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.039416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.039429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.039436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.039442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.039458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.049350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.049401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.049414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.049424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.049430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.049444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.059387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.059453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.059466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.059473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.059479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.059493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.069466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.069523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.069536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.069543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.069549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.069563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.079477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.079532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.079546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.079553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.079559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.079573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.089457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.089511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.089524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.089531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.089537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.089555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.099499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.099581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.099594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.099601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.099607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.099621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.109543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.109601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.109615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.109622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.109628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.109648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.119598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.119657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.119670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.119677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.119683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.119698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.129588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.129643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.129657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.129663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.129670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.129684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.139596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.139650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.139664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.139671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.139678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.139691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.149690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.149746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.149759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.149766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.149772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.386 [2024-12-06 18:42:23.149787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.386 qpair failed and we were unable to recover it.
00:30:28.386 [2024-12-06 18:42:23.159708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.386 [2024-12-06 18:42:23.159762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.386 [2024-12-06 18:42:23.159775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.386 [2024-12-06 18:42:23.159783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.386 [2024-12-06 18:42:23.159789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.387 [2024-12-06 18:42:23.159803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.387 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.169642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.169713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.169727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.169733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.169740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.169754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.179701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.179751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.179764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.179775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.179781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.179795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.189778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.189830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.189844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.189851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.189857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.189871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.199821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.199898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.199911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.199918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.199924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.199938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.209766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.209817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.209830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.209837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.209844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.209858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.219818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.219882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.219895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.219903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.219909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.219926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.229781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.229877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.229891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.229898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.229905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.229924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.239923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.239977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.239991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.239998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.240004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.240018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.249861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.249910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.249923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.249930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.249936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.249950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.259924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.650 [2024-12-06 18:42:23.259974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.650 [2024-12-06 18:42:23.259987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.650 [2024-12-06 18:42:23.259994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.650 [2024-12-06 18:42:23.260000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.650 [2024-12-06 18:42:23.260013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.650 qpair failed and we were unable to recover it.
00:30:28.650 [2024-12-06 18:42:23.269976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.270030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.270044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.270051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.270057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.270071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.280019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.280070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.280083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.280090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.280096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.280110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.289975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.290024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.290038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.290045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.290051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.290065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.300042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.300119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.300133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.300139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.300146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.300160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.310098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.310154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.310171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.310178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.310184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.310198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.320111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.320201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.320214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.320221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.320227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.320241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.329960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.330006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.330020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.330027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.330033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.330047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.340114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.340179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.340193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.340199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.340206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.340219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.350146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.350210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.350223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.350230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.350239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.350253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.360191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.360240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.360253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.360260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.360267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.360281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.370198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.370248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.370262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.370269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.370275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.370289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.380220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.380268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.380281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.380288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.380294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.380308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.390262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.390307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.390320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.390328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.390334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.390348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.400204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.400262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.400276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.400283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.400289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.400303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.651 [2024-12-06 18:42:23.410304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.651 [2024-12-06 18:42:23.410396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.651 [2024-12-06 18:42:23.410410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.651 [2024-12-06 18:42:23.410417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.651 [2024-12-06 18:42:23.410423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.651 [2024-12-06 18:42:23.410437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.651 qpair failed and we were unable to recover it.
00:30:28.652 [2024-12-06 18:42:23.420316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.652 [2024-12-06 18:42:23.420360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.652 [2024-12-06 18:42:23.420373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.652 [2024-12-06 18:42:23.420380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.652 [2024-12-06 18:42:23.420386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.652 [2024-12-06 18:42:23.420400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.652 qpair failed and we were unable to recover it.
00:30:28.652 [2024-12-06 18:42:23.430314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.652 [2024-12-06 18:42:23.430389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.652 [2024-12-06 18:42:23.430401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.652 [2024-12-06 18:42:23.430409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.652 [2024-12-06 18:42:23.430415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.652 [2024-12-06 18:42:23.430429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.652 qpair failed and we were unable to recover it.
00:30:28.915 [2024-12-06 18:42:23.440430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.915 [2024-12-06 18:42:23.440482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.915 [2024-12-06 18:42:23.440499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.915 [2024-12-06 18:42:23.440506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.915 [2024-12-06 18:42:23.440512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.915 [2024-12-06 18:42:23.440526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.915 qpair failed and we were unable to recover it.
00:30:28.915 [2024-12-06 18:42:23.450407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.915 [2024-12-06 18:42:23.450460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.915 [2024-12-06 18:42:23.450474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.915 [2024-12-06 18:42:23.450481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.915 [2024-12-06 18:42:23.450487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.915 [2024-12-06 18:42:23.450501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.915 qpair failed and we were unable to recover it.
00:30:28.915 [2024-12-06 18:42:23.460434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.915 [2024-12-06 18:42:23.460480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.915 [2024-12-06 18:42:23.460493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.915 [2024-12-06 18:42:23.460500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.915 [2024-12-06 18:42:23.460506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.915 [2024-12-06 18:42:23.460519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.915 qpair failed and we were unable to recover it.
00:30:28.915 [2024-12-06 18:42:23.470357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.915 [2024-12-06 18:42:23.470423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.470437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.470444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.470450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.470464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.480546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.480646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.480659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.480666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.480676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.480691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.490499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.490542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.490555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.490562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.490569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.490582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.500506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.500579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.500592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.500599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.500605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.500619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.510570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.510617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.510630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.510641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.510648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.510662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.520648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.520700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.520714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.520721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.520727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.520740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.530623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.530714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.530728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.530734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.530740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.530754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.540617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.540660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.540674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.540681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.540687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.540702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.550691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.550782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.550795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.550802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.550808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.550822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.560750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.560819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.560832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.560839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.560846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.560860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.570740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.570794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.570811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.570818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.570824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.570838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.580779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.580869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.580882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.580889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.580895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.580909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.590826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.916 [2024-12-06 18:42:23.590874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.916 [2024-12-06 18:42:23.590887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.916 [2024-12-06 18:42:23.590894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.916 [2024-12-06 18:42:23.590901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.916 [2024-12-06 18:42:23.590914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.916 qpair failed and we were unable to recover it.
00:30:28.916 [2024-12-06 18:42:23.600851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.600904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.600917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.600924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.600930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.600944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.610865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.610910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.610923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.610933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.610940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.610953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.620867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.620909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.620923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.620930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.620936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.620950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.630923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.630970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.630984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.630991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.630997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.631011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.641014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.641066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.641080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.641087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.641093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.641107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.650957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.651000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.651013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.651020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.651026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.651043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.661028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.661113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.661126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.661133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.661139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.661153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.671053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.671099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.671112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.671119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.671125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.671139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.681000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.681067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.681080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.681087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.681093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.681107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:28.917 [2024-12-06 18:42:23.691076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.917 [2024-12-06 18:42:23.691124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.917 [2024-12-06 18:42:23.691138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.917 [2024-12-06 18:42:23.691145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.917 [2024-12-06 18:42:23.691151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:28.917 [2024-12-06 18:42:23.691165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.917 qpair failed and we were unable to recover it.
00:30:29.181 [2024-12-06 18:42:23.701097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.181 [2024-12-06 18:42:23.701147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.181 [2024-12-06 18:42:23.701161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.181 [2024-12-06 18:42:23.701168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.181 [2024-12-06 18:42:23.701174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.181 [2024-12-06 18:42:23.701188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.181 qpair failed and we were unable to recover it.
00:30:29.181 [2024-12-06 18:42:23.711117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.181 [2024-12-06 18:42:23.711166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.181 [2024-12-06 18:42:23.711179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.181 [2024-12-06 18:42:23.711186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.181 [2024-12-06 18:42:23.711192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.181 [2024-12-06 18:42:23.711205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.181 qpair failed and we were unable to recover it.
00:30:29.181 [2024-12-06 18:42:23.721165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.181 [2024-12-06 18:42:23.721239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.181 [2024-12-06 18:42:23.721253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.181 [2024-12-06 18:42:23.721259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.181 [2024-12-06 18:42:23.721265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.181 [2024-12-06 18:42:23.721279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.181 qpair failed and we were unable to recover it.
00:30:29.181 [2024-12-06 18:42:23.731132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.181 [2024-12-06 18:42:23.731174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.181 [2024-12-06 18:42:23.731188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.181 [2024-12-06 18:42:23.731195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.181 [2024-12-06 18:42:23.731201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.181 [2024-12-06 18:42:23.731214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.181 qpair failed and we were unable to recover it.
00:30:29.181 [2024-12-06 18:42:23.741173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.181 [2024-12-06 18:42:23.741216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.181 [2024-12-06 18:42:23.741230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.181 [2024-12-06 18:42:23.741240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.181 [2024-12-06 18:42:23.741246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.181 [2024-12-06 18:42:23.741260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.181 qpair failed and we were unable to recover it.
00:30:29.181 [2024-12-06 18:42:23.751214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.181 [2024-12-06 18:42:23.751259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.181 [2024-12-06 18:42:23.751272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.181 [2024-12-06 18:42:23.751279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.181 [2024-12-06 18:42:23.751285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.181 [2024-12-06 18:42:23.751299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.181 qpair failed and we were unable to recover it.
00:30:29.181 [2024-12-06 18:42:23.761310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.761364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.761378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.761385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.761391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.761405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.771284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.771332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.771345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.771352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.771359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.771372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.781304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.781355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.781368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.781375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.781382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.781399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.791313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.791362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.791376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.791383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.791389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.791403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.801414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.801466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.801479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.801486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.801493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.801506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.811395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.811440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.811454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.811461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.811467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.811481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.821298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.821377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.821390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.821398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.821404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.821418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.831414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.831459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.831473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.831480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.831486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.831501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.841530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.841578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.841592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.841599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.841605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.841619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.851519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.851567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.851581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.851588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.851594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.851608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.861519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.861565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.861579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.861585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.861592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.861605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.871567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.871611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.871627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.871634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.871645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.871659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.182 [2024-12-06 18:42:23.881634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.182 [2024-12-06 18:42:23.881690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.182 [2024-12-06 18:42:23.881703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.182 [2024-12-06 18:42:23.881710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.182 [2024-12-06 18:42:23.881717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.182 [2024-12-06 18:42:23.881731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.182 qpair failed and we were unable to recover it.
00:30:29.183 [2024-12-06 18:42:23.891618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.183 [2024-12-06 18:42:23.891667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.183 [2024-12-06 18:42:23.891681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.183 [2024-12-06 18:42:23.891688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.183 [2024-12-06 18:42:23.891694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.183 [2024-12-06 18:42:23.891708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.183 qpair failed and we were unable to recover it.
00:30:29.183 [2024-12-06 18:42:23.901654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.183 [2024-12-06 18:42:23.901698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.183 [2024-12-06 18:42:23.901713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.183 [2024-12-06 18:42:23.901720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.183 [2024-12-06 18:42:23.901727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.183 [2024-12-06 18:42:23.901741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.183 qpair failed and we were unable to recover it.
00:30:29.183 [2024-12-06 18:42:23.911674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.183 [2024-12-06 18:42:23.911720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.183 [2024-12-06 18:42:23.911734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.183 [2024-12-06 18:42:23.911740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.183 [2024-12-06 18:42:23.911750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.183 [2024-12-06 18:42:23.911765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.183 qpair failed and we were unable to recover it.
00:30:29.183 [2024-12-06 18:42:23.921673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.183 [2024-12-06 18:42:23.921724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.183 [2024-12-06 18:42:23.921738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.183 [2024-12-06 18:42:23.921745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.183 [2024-12-06 18:42:23.921752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.183 [2024-12-06 18:42:23.921766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.183 qpair failed and we were unable to recover it.
00:30:29.183 [2024-12-06 18:42:23.931677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.183 [2024-12-06 18:42:23.931725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.183 [2024-12-06 18:42:23.931738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.183 [2024-12-06 18:42:23.931745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.183 [2024-12-06 18:42:23.931751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.183 [2024-12-06 18:42:23.931765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.183 qpair failed and we were unable to recover it.
00:30:29.183 [2024-12-06 18:42:23.941623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.183 [2024-12-06 18:42:23.941672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.183 [2024-12-06 18:42:23.941686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.183 [2024-12-06 18:42:23.941693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.183 [2024-12-06 18:42:23.941699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.183 [2024-12-06 18:42:23.941713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.183 qpair failed and we were unable to recover it.
00:30:29.183 [2024-12-06 18:42:23.951794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.183 [2024-12-06 18:42:23.951841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.183 [2024-12-06 18:42:23.951854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.183 [2024-12-06 18:42:23.951861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.183 [2024-12-06 18:42:23.951867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.183 [2024-12-06 18:42:23.951881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.183 qpair failed and we were unable to recover it.
00:30:29.183 [2024-12-06 18:42:23.961870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.183 [2024-12-06 18:42:23.961920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.183 [2024-12-06 18:42:23.961934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.183 [2024-12-06 18:42:23.961941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.183 [2024-12-06 18:42:23.961947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.183 [2024-12-06 18:42:23.961961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.183 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:23.971812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:23.971854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:23.971867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:23.971875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:23.971881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:23.971895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:23.981867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:23.981905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:23.981915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:23.981920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:23.981924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:23.981934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:23.991904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:23.991969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:23.991979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:23.991983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:23.991988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:23.991998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:24.001969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:24.002019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:24.002033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:24.002038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:24.002042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:24.002052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:24.011957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:24.011995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:24.012005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:24.012010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:24.012015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:24.012024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:24.021954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:24.022002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:24.022011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:24.022016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:24.022021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:24.022031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:24.032006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:24.032047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:24.032057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:24.032062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:24.032066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:24.032076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:24.042083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:24.042132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:24.042141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:24.042146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:24.042153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:24.042164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:24.052038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.446 [2024-12-06 18:42:24.052076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.446 [2024-12-06 18:42:24.052086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.446 [2024-12-06 18:42:24.052091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.446 [2024-12-06 18:42:24.052095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.446 [2024-12-06 18:42:24.052105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.446 qpair failed and we were unable to recover it.
00:30:29.446 [2024-12-06 18:42:24.062075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.447 [2024-12-06 18:42:24.062143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.447 [2024-12-06 18:42:24.062153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.447 [2024-12-06 18:42:24.062158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.447 [2024-12-06 18:42:24.062162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63bc000b90
00:30:29.447 [2024-12-06 18:42:24.062172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.447 qpair failed and we were unable to recover it.
00:30:29.447 [2024-12-06 18:42:24.072147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.447 [2024-12-06 18:42:24.072262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.447 [2024-12-06 18:42:24.072328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.447 [2024-12-06 18:42:24.072353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.447 [2024-12-06 18:42:24.072374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63b0000b90
00:30:29.447 [2024-12-06 18:42:24.072428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.447 qpair failed and we were unable to recover it.
00:30:29.447 [2024-12-06 18:42:24.082184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.447 [2024-12-06 18:42:24.082318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.447 [2024-12-06 18:42:24.082365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.447 [2024-12-06 18:42:24.082384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.447 [2024-12-06 18:42:24.082399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63b0000b90
00:30:29.447 [2024-12-06 18:42:24.082439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.447 qpair failed and we were unable to recover it.
00:30:29.447 [2024-12-06 18:42:24.082902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a4e10 is same with the state(6) to be set
00:30:29.447 [2024-12-06 18:42:24.092158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.447 [2024-12-06 18:42:24.092267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.447 [2024-12-06 18:42:24.092331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.447 [2024-12-06 18:42:24.092356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.447 [2024-12-06 18:42:24.092377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63b4000b90
00:30:29.447 [2024-12-06 18:42:24.092432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.447 qpair failed and we were unable to recover it.
00:30:29.447 [2024-12-06 18:42:24.102179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.447 [2024-12-06 18:42:24.102272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.447 [2024-12-06 18:42:24.102319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.447 [2024-12-06 18:42:24.102337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.447 [2024-12-06 18:42:24.102352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f63b4000b90
00:30:29.447 [2024-12-06 18:42:24.102393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.447 qpair failed and we were unable to recover it.
00:30:29.447 [2024-12-06 18:42:24.112231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.447 [2024-12-06 18:42:24.112331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.447 [2024-12-06 18:42:24.112402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.447 [2024-12-06 18:42:24.112427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.447 [2024-12-06 18:42:24.112449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11af0c0
00:30:29.447 [2024-12-06 18:42:24.112502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:29.447 qpair failed and we were unable to recover it.
00:30:29.447 [2024-12-06 18:42:24.122294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.447 [2024-12-06 18:42:24.122383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.447 [2024-12-06 18:42:24.122435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.447 [2024-12-06 18:42:24.122454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.447 [2024-12-06 18:42:24.122469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11af0c0
00:30:29.447 [2024-12-06 18:42:24.122509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:29.447 qpair failed and we were unable to recover it.
00:30:29.447 [2024-12-06 18:42:24.123026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a4e10 (9): Bad file descriptor
00:30:29.447 Initializing NVMe Controllers
00:30:29.447 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:29.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:29.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:29.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:29.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:29.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:29.447 Initialization complete. Launching workers.
00:30:29.447 Starting thread on core 1
00:30:29.447 Starting thread on core 2
00:30:29.447 Starting thread on core 3
00:30:29.447 Starting thread on core 0
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:29.447
00:30:29.447 real 0m11.403s
00:30:29.447 user 0m21.831s
00:30:29.447 sys 0m4.021s
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:29.447 ************************************
00:30:29.447 END TEST nvmf_target_disconnect_tc2
00:30:29.447 ************************************
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:29.447 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:29.447 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2326140 ']'
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2326140
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2326140 ']'
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2326140
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2326140
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2326140'
00:30:29.709 killing process with pid 2326140
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2326140
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2326140
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:29.709 18:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:32.256 18:42:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:32.256
00:30:32.256 real 0m21.902s
00:30:32.256 user 0m49.467s
00:30:32.256 sys 0m10.340s
00:30:32.256 18:42:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:32.256 18:42:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:32.256 ************************************
00:30:32.256 END TEST nvmf_target_disconnect
00:30:32.256 ************************************
00:30:32.256 18:42:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:30:32.256
00:30:32.256 real 6m31.225s
00:30:32.256 user 11m18.687s
00:30:32.256 sys 2m15.318s
00:30:32.256 18:42:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:32.256 18:42:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.256 ************************************
00:30:32.256 END TEST nvmf_host
00:30:32.256 ************************************
00:30:32.256 18:42:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:30:32.256 18:42:26 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:30:32.256 18:42:26 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:30:32.256 18:42:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:32.256 18:42:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:32.256 18:42:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:32.256 ************************************
00:30:32.256 START TEST nvmf_target_core_interrupt_mode
00:30:32.256 ************************************
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:30:32.256 * Looking for test storage...
00:30:32.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:32.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.256 --rc genhtml_branch_coverage=1
00:30:32.256 --rc genhtml_function_coverage=1
00:30:32.256 --rc genhtml_legend=1
00:30:32.256 --rc geninfo_all_blocks=1
00:30:32.256 --rc geninfo_unexecuted_blocks=1
00:30:32.256
00:30:32.256 '
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:32.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.256 --rc genhtml_branch_coverage=1
00:30:32.256 --rc genhtml_function_coverage=1
00:30:32.256 --rc genhtml_legend=1
00:30:32.256 --rc geninfo_all_blocks=1
00:30:32.256 --rc geninfo_unexecuted_blocks=1
00:30:32.256
00:30:32.256 '
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:30:32.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.256 --rc genhtml_branch_coverage=1
00:30:32.256 --rc genhtml_function_coverage=1
00:30:32.256 --rc genhtml_legend=1
00:30:32.256 --rc geninfo_all_blocks=1
00:30:32.256 --rc geninfo_unexecuted_blocks=1
00:30:32.256
00:30:32.256 '
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:30:32.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.256 --rc genhtml_branch_coverage=1
00:30:32.256 --rc genhtml_function_coverage=1
00:30:32.256 --rc genhtml_legend=1
00:30:32.256 --rc geninfo_all_blocks=1
00:30:32.256 --rc geninfo_unexecuted_blocks=1
00:30:32.256
00:30:32.256 '
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:32.256 ************************************
00:30:32.256 START TEST nvmf_abort
00:30:32.256 ************************************
00:30:32.256 18:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:30:32.257 * Looking for test storage...
00:30:32.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:32.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.520 --rc genhtml_branch_coverage=1
00:30:32.520 --rc genhtml_function_coverage=1
00:30:32.520 --rc genhtml_legend=1
00:30:32.520 --rc geninfo_all_blocks=1
00:30:32.520 --rc geninfo_unexecuted_blocks=1
00:30:32.520
00:30:32.520 '
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:32.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.520 --rc genhtml_branch_coverage=1
00:30:32.520 --rc genhtml_function_coverage=1
00:30:32.520 --rc genhtml_legend=1
00:30:32.520 --rc geninfo_all_blocks=1
00:30:32.520 --rc geninfo_unexecuted_blocks=1
00:30:32.520
00:30:32.520 '
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:30:32.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.520 --rc genhtml_branch_coverage=1
00:30:32.520 --rc genhtml_function_coverage=1
00:30:32.520 --rc genhtml_legend=1
00:30:32.520 --rc geninfo_all_blocks=1
00:30:32.520 --rc geninfo_unexecuted_blocks=1
00:30:32.520
00:30:32.520 '
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:30:32.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:32.520 --rc genhtml_branch_coverage=1
00:30:32.520 --rc genhtml_function_coverage=1
00:30:32.520 --rc genhtml_legend=1
00:30:32.520 --rc geninfo_all_blocks=1
00:30:32.520 --rc geninfo_unexecuted_blocks=1
00:30:32.520
00:30:32.520 '
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:32.520 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:30:32.521 18:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:30:40.664 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:40.664 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:30:40.665 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:30:40.665 Found net devices under 0000:4b:00.0: cvl_0_0
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:30:40.665 Found net devices under 0000:4b:00.1: cvl_0_1
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:40.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:30:40.665 00:30:40.665 --- 10.0.0.2 ping statistics --- 00:30:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.665 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:40.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:40.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:30:40.665 00:30:40.665 --- 10.0.0.1 ping statistics --- 00:30:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.665 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2331630 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2331630 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2331630 ']' 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.665 18:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.665 [2024-12-06 18:42:34.714545] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:40.665 [2024-12-06 18:42:34.715699] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:30:40.665 [2024-12-06 18:42:34.715751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.665 [2024-12-06 18:42:34.813591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:40.665 [2024-12-06 18:42:34.864306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.665 [2024-12-06 18:42:34.864359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.666 [2024-12-06 18:42:34.864368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.666 [2024-12-06 18:42:34.864376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.666 [2024-12-06 18:42:34.864382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.666 [2024-12-06 18:42:34.866203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.666 [2024-12-06 18:42:34.866362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.666 [2024-12-06 18:42:34.866364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.666 [2024-12-06 18:42:34.944334] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:40.666 [2024-12-06 18:42:34.945416] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:40.666 [2024-12-06 18:42:34.945739] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
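The trace above is the standard nvmf TCP fixture coming up: the two ice ports are discovered, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 while cvl_0_1 keeps 10.0.0.1/24 on the host side, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace in interrupt mode. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket; the retry loop is illustrative, not the exact waitforlisten implementation:

  # launch the target inside the namespace, flags exactly as logged above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods keeps failing until the target is listening on the socket
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done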
00:30:40.666 [2024-12-06 18:42:34.945892] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:40.927 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.927 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:40.927 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:40.927 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.927 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.928 [2024-12-06 18:42:35.583255] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.928 Malloc0 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.928 Delay0 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.928 [2024-12-06 18:42:35.683177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.928 18:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:41.190 [2024-12-06 18:42:35.826473] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:43.743 Initializing NVMe Controllers 00:30:43.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:43.743 controller IO queue size 128 less than required 00:30:43.743 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:43.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:43.743 Initialization complete. Launching workers. 
00:30:43.743 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28653 00:30:43.743 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28714, failed to submit 66 00:30:43.743 success 28653, unsuccessful 61, failed 0 00:30:43.743 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:43.743 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.743 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:43.743 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.743 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.744 rmmod nvme_tcp 00:30:43.744 rmmod nvme_fabrics 00:30:43.744 rmmod nvme_keyring 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2331630 ']' 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2331630 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2331630 ']' 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2331630 00:30:43.744 18:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2331630 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2331630' 00:30:43.744 killing process with pid 2331630 
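Condensed from the xtrace above, the whole abort case is one RPC bring-up sequence plus a single initiator run; every command below appears verbatim in the log (paths shortened, rpc.py talking to the default socket):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # drive queue-depth-128 I/O for 1 second and abort it (-c core mask, -t seconds)
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 layer is what makes this a real test: the artificial latency it injects keeps commands in flight long enough to be abortable, which is why 28714 aborts were submitted and 28653 of them succeeded, with only 61 unsuccessful and 0 outright failures.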
00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2331630 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2331630 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.744 18:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.660 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.660 00:30:45.660 real 0m13.394s 00:30:45.660 user 0m11.002s 00:30:45.660 sys 0m6.934s 00:30:45.661 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:45.661 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.661 ************************************ 00:30:45.661 END TEST nvmf_abort 00:30:45.661 ************************************ 00:30:45.661 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:45.661 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:45.661 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:45.661 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:45.661 ************************************ 00:30:45.661 START TEST nvmf_ns_hotplug_stress 00:30:45.661 ************************************ 00:30:45.661 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:45.922 * Looking for test storage... 
00:30:45.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:45.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.922 --rc genhtml_branch_coverage=1 00:30:45.922 --rc genhtml_function_coverage=1 00:30:45.922 --rc genhtml_legend=1 00:30:45.922 --rc geninfo_all_blocks=1 00:30:45.922 --rc geninfo_unexecuted_blocks=1 00:30:45.922 00:30:45.922 ' 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:45.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.922 --rc genhtml_branch_coverage=1 00:30:45.922 --rc genhtml_function_coverage=1 00:30:45.922 --rc genhtml_legend=1 00:30:45.922 --rc geninfo_all_blocks=1 00:30:45.922 --rc geninfo_unexecuted_blocks=1 00:30:45.922 00:30:45.922 ' 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:45.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.922 --rc genhtml_branch_coverage=1 00:30:45.922 --rc genhtml_function_coverage=1 00:30:45.922 --rc genhtml_legend=1 00:30:45.922 --rc geninfo_all_blocks=1 00:30:45.922 --rc geninfo_unexecuted_blocks=1 00:30:45.922 00:30:45.922 ' 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:45.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.922 --rc genhtml_branch_coverage=1 00:30:45.922 --rc genhtml_function_coverage=1 
00:30:45.922 --rc genhtml_legend=1 00:30:45.922 --rc geninfo_all_blocks=1 00:30:45.922 --rc geninfo_unexecuted_blocks=1 00:30:45.922 00:30:45.922 ' 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.922 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.923 18:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:54.070 18:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:54.070 18:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:54.070 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:54.070 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.070 
18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.070 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:54.071 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:54.071 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.071 18:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:54.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:30:54.071 00:30:54.071 --- 10.0.0.2 ping statistics --- 00:30:54.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.071 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:54.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:30:54.071 00:30:54.071 --- 10.0.0.1 ping statistics --- 00:30:54.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.071 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:54.071 18:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2336324 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2336324 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2336324 ']' 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.071 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:54.071 [2024-12-06 18:42:48.106572] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:54.071 [2024-12-06 18:42:48.107708] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:30:54.071 [2024-12-06 18:42:48.107762] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.071 [2024-12-06 18:42:48.206985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:54.071 [2024-12-06 18:42:48.258450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.071 [2024-12-06 18:42:48.258502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.071 [2024-12-06 18:42:48.258511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.071 [2024-12-06 18:42:48.258518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.071 [2024-12-06 18:42:48.258524] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.071 [2024-12-06 18:42:48.260327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:54.071 [2024-12-06 18:42:48.260485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.071 [2024-12-06 18:42:48.260486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:54.071 [2024-12-06 18:42:48.339477] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:54.071 [2024-12-06 18:42:48.340575] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:54.071 [2024-12-06 18:42:48.340947] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:54.071 [2024-12-06 18:42:48.341114] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
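Here nvmfappstart launches the target inside the namespace with --interrupt-mode and core mask 0xE (cores 1-3, matching the three "Reactor started" notices above), then waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A sketch of that launch-and-wait step; the polling loop is an illustrative stand-in, not the autotest's actual waitforlisten implementation:

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Same invocation as the trace: instance 0, all tracepoint groups, interrupt mode.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
for _ in $(seq 1 100); do
    # rpc_get_methods succeeds once the app is listening on /var/tmp/spdk.sock
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
    sleep 0.1
done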
00:30:54.334 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.334 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:54.334 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.334 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.334 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:54.334 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.334 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:54.334 18:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:54.596 [2024-12-06 18:42:49.129362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.596 18:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:54.596 18:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.857 [2024-12-06 18:42:49.514021] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.857 18:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:55.118 18:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:55.379 Malloc0 00:30:55.379 18:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:55.379 Delay0 00:30:55.379 18:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.642 18:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:55.904 NULL1 00:30:55.904 18:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
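Between nvmfappstart and the stress loop, ns_hotplug_stress.sh provisions the target over RPC: a TCP transport, subsystem cnode1 (serial SPDK00000000000001, up to 10 namespaces, any host allowed), data and discovery listeners on 10.0.0.2:4420, a delay bdev stacked on a malloc bdev, and a 1000 MiB null bdev. The same calls, collected from the trace into one block; the comments on the delay arguments follow the bdev_delay documentation (average/p99 read and write latency in microseconds):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0        # 32 MiB backing store, 512 B blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
$RPC bdev_null_create NULL1 1000 512             # 1000 MiB, 512 B blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2

Delay0 is the namespace the loop below repeatedly hot-removes and re-adds; NULL1 (NSID 2, the one perf later reports against) is the one it resizes.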
00:30:56.165 18:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2337008 00:30:56.165 18:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:30:56.165 18:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:56.165 18:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.165 18:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.426 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:56.426 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:56.688 true 00:30:56.688 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:30:56.688 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:56.948 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.948 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:56.948 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:57.207 true 00:30:57.207 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:30:57.207 18:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.467 18:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.726 18:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:57.726 18:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:57.986 true 00:30:57.986 18:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:30:57.986 18:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:57.986 18:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.246 18:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:58.246 18:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:58.505 true 00:30:58.505 18:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:30:58.505 18:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.766 18:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.766 18:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:58.766 18:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:59.026 true 00:30:59.026 18:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:30:59.026 18:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.285 18:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.546 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:59.546 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:59.546 true 00:30:59.546 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:30:59.546 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.806 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.065 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:00.065 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:00.065 true 00:31:00.325 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:00.325 18:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.325 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.584 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:00.584 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:00.843 true 00:31:00.843 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:00.843 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.843 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.102 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:01.102 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:01.362 true 00:31:01.362 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:01.362 18:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.621 18:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.621 18:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:01.621 18:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:01.881 true 00:31:01.881 18:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2337008 00:31:01.881 18:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.140 18:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.140 18:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:02.140 18:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:02.465 true 00:31:02.465 18:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:02.465 18:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.755 18:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.755 18:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:02.755 18:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:03.015 true 00:31:03.015 18:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:03.015 18:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.276 18:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.276 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:03.276 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:03.536 true 00:31:03.536 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:03.536 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.797 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.057 18:42:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:04.057 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:04.057 true 00:31:04.057 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:04.057 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.319 18:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.580 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:04.580 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:04.580 true 00:31:04.580 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:04.580 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.840 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.101 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:05.101 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:05.101 true 00:31:05.363 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:05.363 18:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.363 18:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.624 18:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:05.624 18:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:05.884 true 00:31:05.884 18:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:05.884 18:43:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.884 18:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.145 18:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:06.145 18:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:06.407 true 00:31:06.407 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:06.407 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.668 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.668 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:06.668 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:06.929 true 00:31:06.929 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:06.929 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.189 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.189 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:07.189 18:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:07.449 true 00:31:07.449 18:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:07.449 18:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.709 18:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.970 18:43:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:07.970 18:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:07.970 true 00:31:07.970 18:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:07.970 18:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.231 18:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.491 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:08.491 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:08.491 true 00:31:08.491 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:08.491 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.751 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.011 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:09.011 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:09.273 true 00:31:09.273 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:09.273 18:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.273 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.534 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:09.534 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:09.794 true 00:31:09.794 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:09.794 18:43:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.054 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.054 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:10.054 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:10.314 true 00:31:10.315 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:10.315 18:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.574 18:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.574 18:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:10.574 18:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:10.834 true 00:31:10.834 18:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:10.834 18:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.094 18:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.354 18:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:11.354 18:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:11.354 true 00:31:11.354 18:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:11.354 18:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.615 18:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.876 18:43:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:11.876 18:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:11.876 true 00:31:11.876 18:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:11.876 18:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.137 18:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.397 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:12.397 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:12.397 true 00:31:12.658 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:12.658 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.658 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.918 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:12.919 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:13.179 true 00:31:13.179 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:13.179 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.179 18:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.439 18:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:13.439 18:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:13.699 true 00:31:13.700 18:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:13.700 18:43:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.960 18:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.960 18:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:13.960 18:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:14.220 true 00:31:14.220 18:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:14.220 18:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.500 18:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.500 18:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:14.500 18:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:14.768 true 00:31:14.768 18:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:14.768 18:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.029 18:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.290 18:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:15.290 18:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:15.290 true 00:31:15.290 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:15.290 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.550 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.811 18:43:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:15.811 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:15.811 true 00:31:15.811 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:15.811 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.071 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.329 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:31:16.329 18:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:31:16.589 true 00:31:16.589 18:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:16.589 18:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.589 18:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.850 18:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:31:16.850 18:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:31:17.111 true 00:31:17.111 18:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:17.111 18:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.371 18:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.371 18:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:31:17.371 18:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:31:17.660 true 00:31:17.660 18:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:17.660 18:43:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.920 18:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.920 18:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:31:17.920 18:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:31:18.180 true 00:31:18.180 18:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:18.180 18:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.439 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.439 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:31:18.439 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:31:18.699 true 00:31:18.699 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:18.699 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.960 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.220 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:31:19.220 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:31:19.220 true 00:31:19.220 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:19.220 18:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.479 18:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.740 18:43:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:31:19.740 18:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:31:19.740 true 00:31:19.740 18:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:19.740 18:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.001 18:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.261 18:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:31:20.261 18:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:31:20.523 true 00:31:20.523 18:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:20.523 18:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.523 18:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.782 18:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:31:20.782 18:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:31:21.042 true 00:31:21.042 18:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:21.042 18:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.042 18:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.302 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:31:21.302 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:31:21.562 true 00:31:21.562 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:21.562 18:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.822 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.822 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:31:21.822 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:31:22.083 true 00:31:22.083 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:22.083 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.345 18:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.345 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:31:22.345 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:31:22.606 true 00:31:22.606 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:22.606 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.866 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.127 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:31:23.127 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:31:23.127 true 00:31:23.127 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:23.127 18:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.389 18:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.651 18:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:23.651 18:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:23.651 true 00:31:23.914 18:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:23.914 18:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.915 18:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.175 18:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:24.175 18:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:24.436 true 00:31:24.436 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:24.436 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.436 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.698 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:24.698 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:24.960 true 00:31:24.960 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:24.960 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.221 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.221 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:25.221 18:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:25.482 true 00:31:25.482 18:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008 00:31:25.482 18:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:25.743 18:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:25.743 18:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:31:25.743 18:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:31:26.005 true
00:31:26.005 18:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008
00:31:26.005 18:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:26.266 18:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:26.266 Initializing NVMe Controllers
00:31:26.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:26.266 Controller IO queue size 128, less than required.
00:31:26.266 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:26.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:31:26.266 Initialization complete. Launching workers.
00:31:26.266 ========================================================
00:31:26.266                                                                              Latency(us)
00:31:26.266 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:31:26.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30307.68      14.80    4223.28    1110.07   11993.26
00:31:26.266 ========================================================
00:31:26.266 Total                                                                    :   30307.68      14.80    4223.28    1110.07   11993.26
00:31:26.266
00:31:26.526 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:31:26.526 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:31:26.526 true
00:31:26.526 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2337008
00:31:26.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2337008) - No such process
00:31:26.527 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2337008
00:31:26.527 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:26.788 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:31:27.049 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:31:27.049 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:31:27.049 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:31:27.049 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:27.049 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:31:27.049 null0
00:31:27.049 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:27.049 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:27.049 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:31:27.311 null1
00:31:27.311 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:31:27.311 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:31:27.311 18:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:31:27.573 null2
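The iterations traced above are ns_hotplug_stress.sh lines 44-50: each pass first probes the I/O generator with kill -0 (signal 0 delivers nothing; it only tests whether PID 2337008 still exists), then hot-removes namespace 1, re-adds the Delay0 bdev, and resizes NULL1 to the next size. Once the generator exits, kill -0 fails with the "No such process" seen above and the script falls through to the wait at line 53 and the namespace cleanup at lines 54-55. A minimal sketch of that loop, reconstructed from the trace; the variable names rpc_py, nqn and perf_pid are assumptions for illustration, not necessarily the script's own:

    null_size=1000
    while kill -0 "$perf_pid"; do                        # @44: liveness probe only, no signal delivered
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1      # @45: hot-remove NSID 1 while I/O is in flight
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0    # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                     # @49: bump the target size (1046, 1047, ...)
        "$rpc_py" bdev_null_resize NULL1 "$null_size"    # @50: grow NULL1; the RPC prints "true" on success
    done
    wait "$perf_pid"                                     # @53: reap the finished I/O generator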
00:31:27.573 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:27.573 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:27.573 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:27.573 null3 00:31:27.573 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:27.573 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:27.573 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:27.834 null4 00:31:27.834 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:27.834 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:27.834 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:28.095 null5 00:31:28.095 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:28.095 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:28.095 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:28.095 null6 00:31:28.095 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:28.095 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:28.095 18:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:28.357 null7 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
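The @58-@60 records above create the eight backing devices for the parallel phase: bdev_null_create takes a bdev name, a total size in MB and a block size in bytes, and echoes the new bdev's name (null0 through null7) on success. The shape of the loop, inferred from the xtrace (rpc_py again stands in for the script's RPC wrapper):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # @60: 100 MB null bdev with 4096-byte blocks; the RPC echoes the bdev name
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done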
00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:28.357 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
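From here on the trace is the add_remove helper (script lines 14-18) running in eight concurrent subshells, which is why the @16/@17/@18 records interleave out of order. A plausible reconstruction from the trace, offered as a sketch rather than the script's literal text; the ten-iteration bound comes from the (( i < 10 )) guards:

    add_remove() {
        local nsid=$1 bdev=$2                                          # @14: e.g. add_remove 1 null0
        for ((i = 0; i < 10; i++)); do                                 # @16: loop counter
            # attach the null bdev at a fixed NSID, then detach it again
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18
        done
    }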
00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
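The @62-@64 records wrapped around these workers are the launcher loop: each add_remove call runs in the background, its PID is appended to the pids array, and line 66 then waits on all of them at once, which is the wait 2343186 2343187 ... visible below. Sketched under the same naming assumptions as above:

    for ((i = 0; i < nthreads; i++)); do   # @62
        add_remove $((i + 1)) "null$i" &   # @63: NSID i+1 churns bdev null<i> in the background
        pids+=($!)                         # @64: record the worker's PID
    done
    wait "${pids[@]}"                      # @66: block until all eight workers finish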
00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2343186 2343187 2343190 2343191 2343193 2343195 2343197 2343199 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.358 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.621 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.621 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.621 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:28.621 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.621 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:28.621 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.621 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.621 18:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.883 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.884 18:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:28.884 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.146 18:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.146 18:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.408 18:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.408 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.670 18:43:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:29.670 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.931 18:43:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.931 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:30.192 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.193 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:30.463 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.463 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.463 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:30.463 18:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:30.463 18:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.463 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.728 18:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:30.728 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.989 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.249 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.250 18:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:31.250 18:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:31.250 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:31.250 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.250 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.250 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.511 
18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.511 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:31.772 18:43:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.772 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.772 
18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.031 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.290 18:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:32.290 rmmod nvme_tcp 00:31:32.290 rmmod nvme_fabrics 00:31:32.290 rmmod nvme_keyring 00:31:32.290 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.290 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:31:32.290 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:31:32.290 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2336324 ']' 00:31:32.290 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2336324 00:31:32.291 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2336324 ']' 00:31:32.291 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2336324 00:31:32.291 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:32.291 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.551 18:43:27 
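The interleaved add/remove xtrace above is the tail of the hotplug stress loop: ten passes over nsid 1-8 on nqn.2016-06.io.spdk:cnode1, with nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns racing each other while the target keeps serving the subsystem. A minimal sketch of that loop shape, reconstructed from the ns_hotplug_stress.sh@16-@18 trace lines; the out-of-order interleaving above suggests the RPCs run as background jobs, and the nsid-to-bdev pairing follows the "-n 3 ... null2" pattern in the trace:

#!/usr/bin/env bash
# Sketch of the namespace hotplug stress pattern seen in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; ++i)); do
    n=$(((RANDOM % 8) + 1))                       # pick nsid 1..8
    "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
done
wait   # reap the in-flight RPCs before the trap/teardown runs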
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336324 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2336324' 00:31:32.551 killing process with pid 2336324 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2336324 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2336324 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.551 18:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.098 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:35.098 00:31:35.098 real 0m48.950s 00:31:35.098 user 3m2.793s 00:31:35.098 sys 0m21.819s 00:31:35.098 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:35.099 ************************************ 00:31:35.099 END TEST nvmf_ns_hotplug_stress 00:31:35.099 ************************************ 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
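nvmftestfini and killprocess, traced above, unwind everything the test set up: unload the kernel initiator modules (the rmmod lines), check that the pid still belongs to an SPDK reactor before killing it, and strip only the tagged firewall rules. A condensed sketch of that shutdown path, with the pid taken from the log; the retry pacing around modprobe and the assumption that the target is a child of the calling shell (so wait can reap it) are assumptions, not part of the trace:

# Sketch of the teardown traced by nvmftestfini/killprocess above.
nvmfpid=2336324                       # target pid reported in the log

for i in {1..20}; do                  # nvme-tcp can be busy while qpairs drain
    modprobe -v -r nvme-tcp && break
    sleep 1                           # pacing is an assumption, not traced
done
modprobe -v -r nvme-fabrics

if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"                   # SIGTERM the reactor process
    wait "$nvmfpid"                   # works because the harness spawned it
fi

# Remove only the rules tagged SPDK_NVMF, leaving everything else intact:
iptables-save | grep -v SPDK_NVMF | iptables-restore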
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:35.099 ************************************ 00:31:35.099 START TEST nvmf_delete_subsystem 00:31:35.099 ************************************ 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:35.099 * Looking for test storage... 00:31:35.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:35.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.099 --rc genhtml_branch_coverage=1 00:31:35.099 --rc genhtml_function_coverage=1 00:31:35.099 --rc genhtml_legend=1 00:31:35.099 --rc geninfo_all_blocks=1 00:31:35.099 --rc geninfo_unexecuted_blocks=1 00:31:35.099 00:31:35.099 ' 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:35.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.099 --rc genhtml_branch_coverage=1 00:31:35.099 --rc genhtml_function_coverage=1 00:31:35.099 --rc genhtml_legend=1 00:31:35.099 --rc geninfo_all_blocks=1 00:31:35.099 --rc geninfo_unexecuted_blocks=1 00:31:35.099 00:31:35.099 ' 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:35.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.099 --rc genhtml_branch_coverage=1 00:31:35.099 --rc genhtml_function_coverage=1 00:31:35.099 --rc genhtml_legend=1 00:31:35.099 --rc geninfo_all_blocks=1 00:31:35.099 --rc geninfo_unexecuted_blocks=1 00:31:35.099 00:31:35.099 ' 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:35.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.099 --rc genhtml_branch_coverage=1 00:31:35.099 --rc genhtml_function_coverage=1 00:31:35.099 --rc 
genhtml_legend=1 00:31:35.099 --rc geninfo_all_blocks=1 00:31:35.099 --rc geninfo_unexecuted_blocks=1 00:31:35.099 00:31:35.099 ' 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.099 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.100 18:43:29 
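The lt/cmp_versions/decimal calls traced a few lines up are the harness deciding whether the installed lcov predates 2.x: both version strings are split on the separators in IFS=.-: and the fields are compared numerically, left to right. A simplified standalone sketch of the same idea (the real scripts/common.sh helper additionally validates each field through decimal before comparing):

# Return 0 (true) when dotted version $1 sorts strictly before $2.
version_lt() {
    local IFS=.-:                         # same separators the trace shows
    local -a a=($1) b=($2)
    local i n=$((${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}))
    for ((i = 0; i < n; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                              # equal versions are not "less than"
}

version_lt 1.15 2 && echo 'lcov predates 2.x: use the legacy --rc coverage flags'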
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.100 18:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.261 18:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:43.261 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.262 18:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:43.262 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:43.262 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.262 18:43:36 
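NIC discovery above works purely from sysfs: each candidate PCI function (matched earlier by vendor:device, here 0x8086:0x159b for the E810 "ice" ports) is resolved to its kernel netdev by globbing the device's net/ directory, which is exactly the pci_net_devs expansion in the trace. The same step in isolation, using the BDF the log just reported:

# Resolve a PCI function to its network interface name via sysfs.
pci=0000:4b:00.0                          # BDF found in the trace above

for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue            # glob stayed literal: no netdev bound
    echo "Found net devices under $pci: ${path##*/}"
done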
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:43.262 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:43.262 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:43.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:43.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms
00:31:43.262
00:31:43.262 --- 10.0.0.2 ping statistics ---
00:31:43.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:43.262 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms
00:31:43.262 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:43.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:43.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms
00:31:43.263
00:31:43.263 --- 10.0.0.1 ping statistics ---
00:31:43.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:43.263 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:43.263 18:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2348336
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2348336
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2348336 ']'
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
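The nvmf_tcp_init sequence above builds a two-namespace test topology out of the pair of physical E810 ports: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule opens TCP/4420, and the two pings prove reachability in both directions. Condensed into one runnable sequence (interface and namespace names as traced; run as root):

# Point-to-point topology: target in its own netns, initiator in the root ns.
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"                    # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
# Tag the rule so teardown can strip it with 'grep -v SPDK_NVMF':
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # root ns -> target
ip netns exec "$ns" ping -c 1 10.0.0.1             # netns -> initiator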
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.263 [2024-12-06 18:43:37.109699] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:43.263 [2024-12-06 18:43:37.110835] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:31:43.263 [2024-12-06 18:43:37.110884] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.263 [2024-12-06 18:43:37.185375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:43.263 [2024-12-06 18:43:37.230793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.263 [2024-12-06 18:43:37.230847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.263 [2024-12-06 18:43:37.230854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.263 [2024-12-06 18:43:37.230859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.263 [2024-12-06 18:43:37.230864] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.263 [2024-12-06 18:43:37.232360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.263 [2024-12-06 18:43:37.232362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.263 [2024-12-06 18:43:37.305866] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:43.263 [2024-12-06 18:43:37.306145] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:43.263 [2024-12-06 18:43:37.306539] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
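nvmfappstart then launches the target inside that namespace with --interrupt-mode, so the two reactors noticed above park on file descriptors instead of busy-polling, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming rpc.py's spdk_get_version call as the liveness probe (the retry budget here is illustrative, not the test's exact value):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      # the target is up once the RPC socket answers
      ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version > /dev/null 2>&1 && break
      kill -0 "$nvmfpid" || { echo 'nvmf_tgt died during startup'; exit 1; }
      sleep 0.5
  done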
00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.263 [2024-12-06 18:43:37.389341] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.263 [2024-12-06 18:43:37.417603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.263 NULL1 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.263 18:43:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.263 Delay0 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2348366 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:43.263 18:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:43.263 [2024-12-06 18:43:37.538940] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
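The rpc_cmd calls above are what make delete-while-busy reproducible: a null bdev is wrapped in a delay bdev with roughly one second of injected latency per operation, exported as a namespace of cnode1, and spdk_nvme_perf is then pointed at it with queue depth 128, so there is always a deep backlog of in-flight I/O to abort when the subsystem goes away. The same setup written as direct rpc.py calls (a sketch; rpc_cmd in the test is a thin wrapper that adds the socket and tracing plumbing):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s added latency on every I/O type
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC -q 128 -w randrw -M 70 -o 512 -P 4 -t 5 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &   # I/O to delete out from under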
00:31:45.175 18:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:45.175 18:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:45.175 18:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:45.175 [... long runs of repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' messages elided; they are interleaved with the distinct state-change errors below while the subsystem is deleted under load ...]
00:31:45.175 [2024-12-06 18:43:39.716333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2062680 is same with the state(6) to be set
00:31:45.176 [2024-12-06 18:43:39.721121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f853000d020 is same with the state(6) to be set
00:31:46.118 [2024-12-06 18:43:40.682043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20639b0 is same with the state(6) to be set
00:31:46.118 [2024-12-06 18:43:40.719989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2062860 is same with the state(6) to be set
00:31:46.118 [2024-12-06 18:43:40.720389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20624a0 is same with the state(6) to be set
00:31:46.118 [2024-12-06 18:43:40.723444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f853000d350 is same with the state(6) to be set
00:31:46.119 [2024-12-06 18:43:40.723774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8530000c40 is same with the state(6) to be set
00:31:46.119 Initializing NVMe Controllers
00:31:46.119 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:46.119 Controller IO queue size 128, less than required.
00:31:46.119 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:46.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:46.119 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:46.119 Initialization complete. Launching workers.
00:31:46.119 ======================================================== 00:31:46.119 Latency(us) 00:31:46.119 Device Information : IOPS MiB/s Average min max 00:31:46.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.25 0.08 914378.96 354.39 1007609.52 00:31:46.119 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.24 0.08 913044.17 345.61 1011363.72 00:31:46.119 ======================================================== 00:31:46.119 Total : 323.49 0.16 913709.52 345.61 1011363.72 00:31:46.119 00:31:46.119 [2024-12-06 18:43:40.724253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20639b0 (9): Bad file descriptor 00:31:46.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:46.119 18:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.119 18:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:46.119 18:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2348366 00:31:46.119 18:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2348366 00:31:46.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2348366) - No such process 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2348366 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2348366 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2348366 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:46.693 [2024-12-06 18:43:41.257616] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2349033 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2349033 00:31:46.693 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:46.693 [2024-12-06 18:43:41.358287] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
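The half-second countdown that follows is delete_subsystem.sh's exit check: after nvmf_delete_subsystem, the aborted commands surface on the initiator as the 'completed with error (sct=0, sc=8)' storms seen earlier (status code 8 in the generic status set is command aborted due to SQ deletion), and the script polls the perf pid until it disappears, failing the test only if the initiator outlives the deletion past its budget. The kill -0 line 57 / sleep 0.5 line 58 pairs below come from a loop of roughly this shape (a sketch reconstructed from the trace; this second run allows about 20 iterations where the first allowed 30):

  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do   # perf process still alive?
      (( delay++ > 20 )) && exit 1             # fail if it survives ~10 s past the delete
      sleep 0.5
  done
  # 'kill: (pid) - No such process' from a final kill -0 is the expected success path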
00:31:47.264 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:47.264 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2349033 00:31:47.264 18:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:47.525 18:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:47.525 18:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2349033 00:31:47.525 18:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:48.096 18:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:48.096 18:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2349033 00:31:48.096 18:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:48.667 18:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:48.667 18:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2349033 00:31:48.667 18:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:49.238 18:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:49.238 18:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2349033 00:31:49.238 18:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:49.821 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:49.821 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2349033 00:31:49.821 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:49.821 Initializing NVMe Controllers 00:31:49.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.821 Controller IO queue size 128, less than required. 00:31:49.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:49.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:49.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:49.821 Initialization complete. Launching workers. 
00:31:49.821 ======================================================== 00:31:49.821 Latency(us) 00:31:49.821 Device Information : IOPS MiB/s Average min max 00:31:49.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002459.65 1000203.42 1006608.55 00:31:49.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004528.94 1000194.51 1012330.54 00:31:49.821 ======================================================== 00:31:49.821 Total : 256.00 0.12 1003494.30 1000194.51 1012330.54 00:31:49.821 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2349033 00:31:50.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2349033) - No such process 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2349033 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.150 rmmod nvme_tcp 00:31:50.150 rmmod nvme_fabrics 00:31:50.150 rmmod nvme_keyring 00:31:50.150 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2348336 ']' 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2348336 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2348336 ']' 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2348336 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.151 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348336 00:31:50.429 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:50.429 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:50.429 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348336' 00:31:50.429 killing process with pid 2348336 00:31:50.429 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2348336 00:31:50.429 18:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2348336 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.430 18:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.342 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.342 00:31:52.342 real 0m17.671s 00:31:52.342 user 0m26.559s 00:31:52.342 sys 0m7.199s 00:31:52.342 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.342 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:52.342 ************************************ 00:31:52.342 END TEST nvmf_delete_subsystem 00:31:52.342 ************************************ 00:31:52.605 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:52.605 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.605 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.605 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:52.605 ************************************ 00:31:52.605 START TEST nvmf_host_management 00:31:52.605 ************************************ 00:31:52.605 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:52.605 * Looking for test storage... 00:31:52.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.605 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:52.605 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:31:52.605 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.867 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:52.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.868 --rc genhtml_branch_coverage=1 00:31:52.868 --rc genhtml_function_coverage=1 00:31:52.868 --rc genhtml_legend=1 00:31:52.868 --rc geninfo_all_blocks=1 00:31:52.868 --rc geninfo_unexecuted_blocks=1 00:31:52.868 00:31:52.868 ' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:52.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.868 --rc genhtml_branch_coverage=1 00:31:52.868 --rc genhtml_function_coverage=1 00:31:52.868 --rc genhtml_legend=1 00:31:52.868 --rc geninfo_all_blocks=1 00:31:52.868 --rc geninfo_unexecuted_blocks=1 00:31:52.868 00:31:52.868 ' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:52.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.868 --rc genhtml_branch_coverage=1 00:31:52.868 --rc genhtml_function_coverage=1 00:31:52.868 --rc genhtml_legend=1 00:31:52.868 --rc geninfo_all_blocks=1 00:31:52.868 --rc geninfo_unexecuted_blocks=1 00:31:52.868 00:31:52.868 ' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:52.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.868 --rc genhtml_branch_coverage=1 00:31:52.868 --rc genhtml_function_coverage=1 00:31:52.868 --rc genhtml_legend=1 
00:31:52.868 --rc geninfo_all_blocks=1 00:31:52.868 --rc geninfo_unexecuted_blocks=1 00:31:52.868 00:31:52.868 ' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... long PATH value with the same toolchain prefixes repeated many times elided ...] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... elided ...] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... elided ...] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... elided ...] 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.868 18:43:47
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.868 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.869 18:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:01.011 18:43:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:01.011 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:01.012 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:01.012 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:01.012 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:01.012 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:01.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:01.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:32:01.012 00:32:01.012 --- 10.0.0.2 ping statistics --- 00:32:01.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.012 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:32:01.012 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:01.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:01.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:01.012 00:32:01.013 --- 10.0.0.1 ping statistics --- 00:32:01.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:01.013 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2353968 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2353968 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2353968 ']' 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:01.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:01.013 18:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.013 [2024-12-06 18:43:54.895909] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:01.013 [2024-12-06 18:43:54.897030] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:32:01.013 [2024-12-06 18:43:54.897080] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:01.013 [2024-12-06 18:43:54.997807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:01.013 [2024-12-06 18:43:55.051383] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:01.013 [2024-12-06 18:43:55.051439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:01.013 [2024-12-06 18:43:55.051447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:01.013 [2024-12-06 18:43:55.051455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:01.013 [2024-12-06 18:43:55.051461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:01.013 [2024-12-06 18:43:55.053500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:01.013 [2024-12-06 18:43:55.053682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:01.013 [2024-12-06 18:43:55.053815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.013 [2024-12-06 18:43:55.053815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:01.013 [2024-12-06 18:43:55.133171] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:01.013 [2024-12-06 18:43:55.134217] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:01.013 [2024-12-06 18:43:55.134590] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:01.013 [2024-12-06 18:43:55.135084] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:01.013 [2024-12-06 18:43:55.135147] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.013 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.013 [2024-12-06 18:43:55.774697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.273 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.273 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:01.273 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.273 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.273 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.274 Malloc0 00:32:01.274 [2024-12-06 18:43:55.874916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2354095 00:32:01.274 18:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2354095 /var/tmp/bdevperf.sock 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2354095 ']' 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:01.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:01.274 { 00:32:01.274 "params": { 00:32:01.274 "name": "Nvme$subsystem", 00:32:01.274 "trtype": "$TEST_TRANSPORT", 00:32:01.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:01.274 "adrfam": "ipv4", 00:32:01.274 "trsvcid": "$NVMF_PORT", 00:32:01.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:01.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:01.274 "hdgst": ${hdgst:-false}, 00:32:01.274 "ddgst": ${ddgst:-false} 00:32:01.274 }, 00:32:01.274 "method": "bdev_nvme_attach_controller" 00:32:01.274 } 00:32:01.274 EOF 00:32:01.274 )") 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:01.274 18:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:01.274 "params": { 00:32:01.274 "name": "Nvme0", 00:32:01.274 "trtype": "tcp", 00:32:01.274 "traddr": "10.0.0.2", 00:32:01.274 "adrfam": "ipv4", 00:32:01.274 "trsvcid": "4420", 00:32:01.274 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:01.274 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:01.274 "hdgst": false, 00:32:01.274 "ddgst": false 00:32:01.274 }, 00:32:01.274 "method": "bdev_nvme_attach_controller" 00:32:01.274 }' 00:32:01.274 [2024-12-06 18:43:55.983606] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:32:01.274 [2024-12-06 18:43:55.983683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354095 ] 00:32:01.535 [2024-12-06 18:43:56.078360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.535 [2024-12-06 18:43:56.132851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.535 Running I/O for 10 seconds... 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=711 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 711 -ge 100 ']' 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.108 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:02.108 [2024-12-06 18:43:56.874356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116ce20 is same with the state(6) to be set 00:32:02.108 [2024-12-06 18:43:56.874755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.874841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.874862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.874881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.874900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.874918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.874935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.874953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.874971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.874979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875103] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.108 [2024-12-06 18:43:56.875627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.108 [2024-12-06 18:43:56.875636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.875986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.875994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.876004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:02.109 [2024-12-06 18:43:56.876011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:02.109 [2024-12-06 18:43:56.876021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121baf0 is same with the state(6) to be set 00:32:02.109 [2024-12-06 18:43:56.877320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:02.109 task offset: 103040 on job bdev=Nvme0n1 fails
00:32:02.109
00:32:02.109 Latency(us)
00:32:02.109 [2024-12-06T17:43:56.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:02.109 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:02.109 Job: Nvme0n1 ended in about 0.58 seconds with error
00:32:02.109 Verification LBA range: start 0x0 length 0x400
00:32:02.109 Nvme0n1 : 0.58 1338.01 83.63 110.92 0.00 43132.25 2075.31 37355.52
00:32:02.109 [2024-12-06T17:43:56.893Z] ===================================================================================================================
00:32:02.109 [2024-12-06T17:43:56.893Z] Total : 1338.01 83.63 110.92 0.00 43132.25 2075.31 37355.52
00:32:02.109 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:02.109 [2024-12-06 18:43:56.879574] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:02.109 [2024-12-06 18:43:56.879619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1002c20 (9): Bad file descriptor
00:32:02.109 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:02.109 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:02.109 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:02.109 [2024-12-06 18:43:56.881028] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:32:02.109 [2024-12-06 18:43:56.881141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:32:02.109 [2024-12-06 18:43:56.881173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:02.109 [2024-12-06 18:43:56.881188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:32:02.109 [2024-12-06 18:43:56.881204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:32:02.109 [2024-12-06 18:43:56.881212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:02.109 [2024-12-06 18:43:56.881220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1002c20
00:32:02.109 [2024-12-06 18:43:56.881242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1002c20 (9): Bad file descriptor
00:32:02.109 [2024-12-06 18:43:56.881258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:32:02.109 [2024-12-06 18:43:56.881266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:32:02.109 [2024-12-06 18:43:56.881276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:32:02.109 [2024-12-06 18:43:56.881288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:02.369 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.370 18:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2354095 00:32:03.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2354095) - No such process 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:03.312 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:03.312 { 00:32:03.312 "params": { 00:32:03.312 "name": "Nvme$subsystem", 00:32:03.312 "trtype": "$TEST_TRANSPORT", 00:32:03.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:03.312 "adrfam": "ipv4", 00:32:03.312 "trsvcid": "$NVMF_PORT", 00:32:03.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:03.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:03.313 "hdgst": ${hdgst:-false}, 00:32:03.313 "ddgst": ${ddgst:-false} 00:32:03.313 }, 00:32:03.313 "method": "bdev_nvme_attach_controller" 00:32:03.313 } 00:32:03.313 EOF 00:32:03.313 )") 00:32:03.313 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:03.313 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
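The heredoc above is the per-subsystem template of gen_nvmf_target_json; the IFS/printf records just below show it fully expanded for this run. As a hand-written equivalent (a sketch only: the "subsystems"/"config" wrapper is assumed from SPDK's usual JSON config layout rather than shown in the trace, and the temp file stands in for the /dev/fd/62 stream the script actually uses):

# Hypothetical standalone config equivalent to gen_nvmf_target_json 0 here;
# values are the ones printed below in the trace.
cat > /tmp/nvme0.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same bdevperf invocation as traced above, reading a file instead of fd 62:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1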
00:32:03.313 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:32:03.313 18:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:32:03.313 "params": {
00:32:03.313 "name": "Nvme0",
00:32:03.313 "trtype": "tcp",
00:32:03.313 "traddr": "10.0.0.2",
00:32:03.313 "adrfam": "ipv4",
00:32:03.313 "trsvcid": "4420",
00:32:03.313 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:03.313 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:03.313 "hdgst": false,
00:32:03.313 "ddgst": false
00:32:03.313 },
00:32:03.313 "method": "bdev_nvme_attach_controller"
00:32:03.313 }'
00:32:03.313 [2024-12-06 18:43:57.963702] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization...
00:32:03.313 [2024-12-06 18:43:57.963786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2354463 ]
00:32:03.313 [2024-12-06 18:43:58.056202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:03.574 [2024-12-06 18:43:58.108552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:03.574 Running I/O for 1 seconds...
00:32:04.960 1697.00 IOPS, 106.06 MiB/s
00:32:04.960 Latency(us)
00:32:04.960 [2024-12-06T17:43:59.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:04.960 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:04.960 Verification LBA range: start 0x0 length 0x400
00:32:04.960 Nvme0n1 : 1.01 1745.22 109.08 0.00 0.00 35907.37 2007.04 38010.88
00:32:04.960 [2024-12-06T17:43:59.744Z] ===================================================================================================================
00:32:04.960 [2024-12-06T17:43:59.744Z] Total : 1745.22 109.08 0.00 0.00 35907.37 2007.04 38010.88
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management --
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.960 rmmod nvme_tcp 00:32:04.960 rmmod nvme_fabrics 00:32:04.960 rmmod nvme_keyring 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2353968 ']' 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2353968 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2353968 ']' 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2353968 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2353968 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2353968' 00:32:04.960 killing process with pid 2353968 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2353968 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2353968 00:32:04.960 [2024-12-06 18:43:59.689168] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.960 18:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:07.502 00:32:07.502 real 0m14.600s 00:32:07.502 user 0m19.355s 00:32:07.502 sys 0m7.378s 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:07.502 ************************************ 00:32:07.502 END TEST nvmf_host_management 00:32:07.502 ************************************ 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:07.502 ************************************ 00:32:07.502 START TEST nvmf_lvol 00:32:07.502 ************************************ 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:07.502 * Looking for test storage... 
00:32:07.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:07.502 18:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:07.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.502 --rc genhtml_branch_coverage=1 00:32:07.502 --rc genhtml_function_coverage=1 00:32:07.502 --rc genhtml_legend=1 00:32:07.502 --rc geninfo_all_blocks=1 00:32:07.502 --rc geninfo_unexecuted_blocks=1 00:32:07.502 00:32:07.502 ' 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:07.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.502 --rc genhtml_branch_coverage=1 00:32:07.502 --rc genhtml_function_coverage=1 00:32:07.502 --rc genhtml_legend=1 00:32:07.502 --rc geninfo_all_blocks=1 00:32:07.502 --rc geninfo_unexecuted_blocks=1 00:32:07.502 00:32:07.502 ' 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:07.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.502 --rc genhtml_branch_coverage=1 00:32:07.502 --rc genhtml_function_coverage=1 00:32:07.502 --rc genhtml_legend=1 00:32:07.502 --rc geninfo_all_blocks=1 00:32:07.502 --rc geninfo_unexecuted_blocks=1 00:32:07.502 00:32:07.502 ' 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:07.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:07.502 --rc genhtml_branch_coverage=1 00:32:07.502 --rc genhtml_function_coverage=1 00:32:07.502 --rc genhtml_legend=1 00:32:07.502 --rc geninfo_all_blocks=1 00:32:07.502 --rc geninfo_unexecuted_blocks=1 00:32:07.502 00:32:07.502 ' 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.502 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=[ /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prepended repeatedly ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin; paths/export.sh@3-@6 repeat the same prepend, export PATH and echo the result — the accumulated duplicate toolchain entries are collapsed here ] 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.503 18:44:02
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:07.503 18:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:15.647 18:44:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:15.647 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:15.647 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:15.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:15.647 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:15.648 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.648 
18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:32:15.648 00:32:15.648 --- 10.0.0.2 ping statistics --- 00:32:15.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.648 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:32:15.648 00:32:15.648 --- 10.0.0.1 ping statistics --- 00:32:15.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.648 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2359077 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2359077 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2359077 ']' 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.648 18:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:15.648 [2024-12-06 18:44:09.634910] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
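Condensed from the nvmftestinit trace above into a runnable sketch: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target side while cvl_0_1 stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace. Device names, addresses and flags are exactly the ones this run used (the SPDK_NVMF iptables comment used for later cleanup is omitted for brevity):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
ping -c 1 10.0.0.2                                                   # target reachable from root ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # initiator reachable from target ns
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7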
00:32:15.648 [2024-12-06 18:44:09.636077] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:32:15.648 [2024-12-06 18:44:09.636140] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.648 [2024-12-06 18:44:09.736334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:15.648 [2024-12-06 18:44:09.788030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.648 [2024-12-06 18:44:09.788090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.648 [2024-12-06 18:44:09.788099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.648 [2024-12-06 18:44:09.788106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.648 [2024-12-06 18:44:09.788113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.648 [2024-12-06 18:44:09.789962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.648 [2024-12-06 18:44:09.790125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.648 [2024-12-06 18:44:09.790127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:15.648 [2024-12-06 18:44:09.868252] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:15.648 [2024-12-06 18:44:09.869330] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:15.648 [2024-12-06 18:44:09.869726] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:15.648 [2024-12-06 18:44:09.869882] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
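Once waitforlisten confirms the RPC socket (the records that follow), the script provisions the volume stack it will exercise. A condensed sketch of the RPC sequence traced below — rpc_py is the repo's scripts/rpc.py, and the UUIDs in the comments are the values this particular run returned:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512                          # -> Malloc0
$rpc_py bdev_malloc_create 64 512                          # -> Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)          # -> c71783e2-eb4d-4f9d-b79c-d0e9abde4651
lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)         # size 20 (LVOL_BDEV_INIT_SIZE) -> 691b6c3a-0cb3-4670-ba64-d5a53a1ec9f9
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420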
00:32:15.910 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:15.910 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:15.910 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:15.910 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:15.910 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:15.910 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.910 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:15.910 [2024-12-06 18:44:10.651064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.910 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:16.172 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:16.172 18:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:16.433 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:16.433 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:16.694 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:16.955 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c71783e2-eb4d-4f9d-b79c-d0e9abde4651 00:32:16.955 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c71783e2-eb4d-4f9d-b79c-d0e9abde4651 lvol 20 00:32:16.955 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=691b6c3a-0cb3-4670-ba64-d5a53a1ec9f9 00:32:16.955 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:17.216 18:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 691b6c3a-0cb3-4670-ba64-d5a53a1ec9f9 00:32:17.478 18:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:17.478 [2024-12-06 18:44:12.234933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:17.739 18:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:17.739 18:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2359485 00:32:17.739 18:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:17.739 18:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:19.128 18:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 691b6c3a-0cb3-4670-ba64-d5a53a1ec9f9 MY_SNAPSHOT 00:32:19.128 18:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0baf39bc-6793-46dd-9d36-8609cf323c15 00:32:19.128 18:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 691b6c3a-0cb3-4670-ba64-d5a53a1ec9f9 30 00:32:19.389 18:44:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0baf39bc-6793-46dd-9d36-8609cf323c15 MY_CLONE 00:32:19.649 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cdf0610e-75ba-4a17-8e19-6f7fd32708fc 00:32:19.649 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cdf0610e-75ba-4a17-8e19-6f7fd32708fc 00:32:19.910 18:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2359485 00:32:29.907 Initializing NVMe Controllers 00:32:29.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:29.907 Controller IO queue size 128, less than required. 00:32:29.907 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:29.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:29.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:29.907 Initialization complete. Launching workers. 
00:32:29.907 ========================================================
00:32:29.907 Latency(us)
00:32:29.907 Device Information : IOPS MiB/s Average min max
00:32:29.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15243.00 59.54 8397.68 2145.04 57122.25
00:32:29.907 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15695.70 61.31 8157.04 4178.89 57773.93
00:32:29.907 ========================================================
00:32:29.907 Total : 30938.70 120.85 8275.60 2145.04 57773.93
00:32:29.907
00:32:29.907 18:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 691b6c3a-0cb3-4670-ba64-d5a53a1ec9f9 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c71783e2-eb4d-4f9d-b79c-d0e9abde4651 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:32:29.907 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.908 rmmod nvme_tcp 00:32:29.908 rmmod nvme_fabrics 00:32:29.908 rmmod nvme_keyring 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2359077 ']' 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2359077 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2359077 ']' 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2359077 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359077 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359077' 00:32:29.908 killing process with pid 2359077 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2359077 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2359077 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:29.908 18:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.301 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.301 00:32:31.301 real 0m23.856s 00:32:31.301 user 0m56.147s 00:32:31.301 sys 0m10.694s 00:32:31.301 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:31.301 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:31.301 ************************************ 00:32:31.301 END TEST nvmf_lvol 00:32:31.301 ************************************ 00:32:31.301 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:31.301 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:31.301 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:31.302 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:31.302 ************************************ 00:32:31.302 START TEST nvmf_lvs_grow 00:32:31.302 
************************************ 00:32:31.302 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:31.302 * Looking for test storage... 00:32:31.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:31.302 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:31.302 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:32:31.302 18:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:31.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.302 --rc genhtml_branch_coverage=1 00:32:31.302 --rc genhtml_function_coverage=1 00:32:31.302 --rc genhtml_legend=1 00:32:31.302 --rc geninfo_all_blocks=1 00:32:31.302 --rc geninfo_unexecuted_blocks=1 00:32:31.302 00:32:31.302 ' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:31.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.302 --rc genhtml_branch_coverage=1 00:32:31.302 --rc genhtml_function_coverage=1 00:32:31.302 --rc genhtml_legend=1 00:32:31.302 --rc geninfo_all_blocks=1 00:32:31.302 --rc geninfo_unexecuted_blocks=1 00:32:31.302 00:32:31.302 ' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:31.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.302 --rc genhtml_branch_coverage=1 00:32:31.302 --rc genhtml_function_coverage=1 00:32:31.302 --rc genhtml_legend=1 00:32:31.302 --rc geninfo_all_blocks=1 00:32:31.302 --rc geninfo_unexecuted_blocks=1 00:32:31.302 00:32:31.302 ' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:31.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.302 --rc genhtml_branch_coverage=1 00:32:31.302 --rc genhtml_function_coverage=1 00:32:31.302 --rc genhtml_legend=1 00:32:31.302 --rc geninfo_all_blocks=1 00:32:31.302 --rc geninfo_unexecuted_blocks=1 00:32:31.302 00:32:31.302 ' 00:32:31.302 18:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
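Note: the build_nvmf_app_args steps above (and the --interrupt-mode append that follows) accumulate the target's command line in a bash array rather than a flat string, so each option survives word-splitting when the command is finally executed. A minimal sketch of the pattern, assuming an illustrative $SPDK_BIN_DIR rather than this workspace's full path:

    # Sketch: build nvmf_tgt argv in an array so quoting stays intact.
    NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")          # $SPDK_BIN_DIR is illustrative
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shm id + tracepoint group mask, as in the log
    NVMF_APP+=(--interrupt-mode)                 # appended when interrupt-mode testing is enabled
    "${NVMF_APP[@]}" &                           # array expands element-wise, one word per arg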
00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.302 18:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.443 18:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
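Note: the common.sh block above whitelists NICs by PCI vendor:device ID (the Intel E810 pair used on this rig is 0x8086:0x159b/0x1592, alongside the x722 and Mellanox tables). A rough standalone equivalent using sysfs, for orientation only — the harness's pci_bus_cache helper is not reproduced here:

    # List PCI functions whose vendor:device matches the E810 IDs seen in this log.
    for dev in /sys/bus/pci/devices/*; do
        ven=$(cat "$dev/vendor") id=$(cat "$dev/device")
        if [[ $ven == 0x8086 && ( $id == 0x159b || $id == 0x1592 ) ]]; then
            echo "${dev##*/}: E810 candidate (net: $(ls "$dev/net" 2>/dev/null))"
        fi
    done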
00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:39.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:39.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.443 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:39.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:39.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.444 18:44:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:39.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:32:39.444 00:32:39.444 --- 10.0.0.2 ping statistics --- 00:32:39.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.444 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:39.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:32:39.444 00:32:39.444 --- 10.0.0.1 ping statistics --- 00:32:39.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.444 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2365831 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2365831 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2365831 ']' 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.444 18:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:39.444 [2024-12-06 18:44:33.627066] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
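Note: nvmf_tcp_init above splits the two E810 ports across network namespaces — the target port moves into cvl_0_0_ns_spdk while the initiator port stays in the root namespace — and the two pings verify reachability in both directions before the target starts. Condensed, the setup the log just performed is (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are specific to this rig):

    ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns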
00:32:39.444 [2024-12-06 18:44:33.628156] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:32:39.444 [2024-12-06 18:44:33.628204] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:39.444 [2024-12-06 18:44:33.729305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.444 [2024-12-06 18:44:33.779635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:39.444 [2024-12-06 18:44:33.779697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:39.444 [2024-12-06 18:44:33.779706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:39.444 [2024-12-06 18:44:33.779713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:39.444 [2024-12-06 18:44:33.779724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:39.444 [2024-12-06 18:44:33.780452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:39.444 [2024-12-06 18:44:33.861810] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:39.444 [2024-12-06 18:44:33.862105] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:39.707 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.707 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:39.707 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:39.707 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.707 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:39.970 [2024-12-06 18:44:34.661335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:39.970 ************************************ 00:32:39.970 START TEST lvs_grow_clean 00:32:39.970 ************************************ 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:39.970 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:40.231 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:40.231 18:44:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:40.492 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:40.492 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:40.492 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:40.752 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:40.752 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:40.752 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 18b439c3-50d3-47a1-bda0-87ba2111f496 lvol 150 00:32:40.752 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=71d81cd7-c36d-431a-a15a-1c08ca615a27 00:32:40.752 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:40.752 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:41.012 [2024-12-06 18:44:35.697034] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:41.012 [2024-12-06 18:44:35.697205] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:41.012 true 00:32:41.012 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:41.012 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:41.272 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:41.272 18:44:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:41.533 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 71d81cd7-c36d-431a-a15a-1c08ca615a27 00:32:41.533 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:41.793 [2024-12-06 18:44:36.409714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:41.793 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2366473 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2366473 /var/tmp/bdevperf.sock 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2366473 ']' 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:42.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.053 18:44:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:42.053 [2024-12-06 18:44:36.650380] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:32:42.053 [2024-12-06 18:44:36.650453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2366473 ] 00:32:42.053 [2024-12-06 18:44:36.742225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.053 [2024-12-06 18:44:36.794398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.994 18:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:42.994 18:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:42.994 18:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:43.254 Nvme0n1 00:32:43.254 18:44:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:43.515 [ 00:32:43.515 { 00:32:43.515 "name": "Nvme0n1", 00:32:43.515 "aliases": [ 00:32:43.515 "71d81cd7-c36d-431a-a15a-1c08ca615a27" 00:32:43.515 ], 00:32:43.515 "product_name": "NVMe disk", 00:32:43.515 "block_size": 4096, 00:32:43.515 "num_blocks": 38912, 00:32:43.515 "uuid": "71d81cd7-c36d-431a-a15a-1c08ca615a27", 00:32:43.515 "numa_id": 0, 00:32:43.515 "assigned_rate_limits": { 00:32:43.515 "rw_ios_per_sec": 0, 00:32:43.515 "rw_mbytes_per_sec": 0, 00:32:43.515 "r_mbytes_per_sec": 0, 00:32:43.515 "w_mbytes_per_sec": 0 00:32:43.515 }, 00:32:43.515 "claimed": false, 00:32:43.515 "zoned": false, 00:32:43.515 "supported_io_types": { 00:32:43.515 "read": true, 00:32:43.515 "write": true, 00:32:43.515 "unmap": true, 00:32:43.515 "flush": true, 00:32:43.515 "reset": true, 00:32:43.515 "nvme_admin": true, 00:32:43.515 "nvme_io": true, 00:32:43.515 "nvme_io_md": false, 00:32:43.515 "write_zeroes": true, 00:32:43.515 "zcopy": false, 00:32:43.515 "get_zone_info": false, 00:32:43.515 "zone_management": false, 00:32:43.515 "zone_append": false, 00:32:43.515 "compare": true, 00:32:43.515 "compare_and_write": true, 00:32:43.515 "abort": true, 00:32:43.515 "seek_hole": false, 00:32:43.515 "seek_data": false, 00:32:43.515 "copy": true, 
00:32:43.515 "nvme_iov_md": false 00:32:43.515 }, 00:32:43.515 "memory_domains": [ 00:32:43.515 { 00:32:43.515 "dma_device_id": "system", 00:32:43.515 "dma_device_type": 1 00:32:43.515 } 00:32:43.515 ], 00:32:43.515 "driver_specific": { 00:32:43.515 "nvme": [ 00:32:43.515 { 00:32:43.515 "trid": { 00:32:43.515 "trtype": "TCP", 00:32:43.515 "adrfam": "IPv4", 00:32:43.515 "traddr": "10.0.0.2", 00:32:43.515 "trsvcid": "4420", 00:32:43.515 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:43.515 }, 00:32:43.515 "ctrlr_data": { 00:32:43.515 "cntlid": 1, 00:32:43.515 "vendor_id": "0x8086", 00:32:43.515 "model_number": "SPDK bdev Controller", 00:32:43.515 "serial_number": "SPDK0", 00:32:43.515 "firmware_revision": "25.01", 00:32:43.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:43.515 "oacs": { 00:32:43.515 "security": 0, 00:32:43.515 "format": 0, 00:32:43.515 "firmware": 0, 00:32:43.515 "ns_manage": 0 00:32:43.515 }, 00:32:43.515 "multi_ctrlr": true, 00:32:43.515 "ana_reporting": false 00:32:43.515 }, 00:32:43.515 "vs": { 00:32:43.515 "nvme_version": "1.3" 00:32:43.515 }, 00:32:43.515 "ns_data": { 00:32:43.515 "id": 1, 00:32:43.515 "can_share": true 00:32:43.515 } 00:32:43.515 } 00:32:43.515 ], 00:32:43.515 "mp_policy": "active_passive" 00:32:43.515 } 00:32:43.515 } 00:32:43.515 ] 00:32:43.515 18:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2366646 00:32:43.515 18:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:43.515 18:44:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:43.515 Running I/O for 10 seconds... 
00:32:44.457 Latency(us) 00:32:44.457 [2024-12-06T17:44:39.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.457 Nvme0n1 : 1.00 16774.00 65.52 0.00 0.00 0.00 0.00 0.00 00:32:44.457 [2024-12-06T17:44:39.241Z] =================================================================================================================== 00:32:44.457 [2024-12-06T17:44:39.241Z] Total : 16774.00 65.52 0.00 0.00 0.00 0.00 0.00 00:32:44.457 00:32:45.400 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:45.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:45.400 Nvme0n1 : 2.00 16959.50 66.25 0.00 0.00 0.00 0.00 0.00 00:32:45.400 [2024-12-06T17:44:40.184Z] =================================================================================================================== 00:32:45.400 [2024-12-06T17:44:40.184Z] Total : 16959.50 66.25 0.00 0.00 0.00 0.00 0.00 00:32:45.400 00:32:45.661 true 00:32:45.661 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:45.661 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:45.921 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:45.921 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:45.921 18:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2366646 00:32:46.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:46.492 Nvme0n1 : 3.00 17190.67 67.15 0.00 0.00 0.00 0.00 0.00 00:32:46.492 [2024-12-06T17:44:41.276Z] =================================================================================================================== 00:32:46.492 [2024-12-06T17:44:41.276Z] Total : 17190.67 67.15 0.00 0.00 0.00 0.00 0.00 00:32:46.492 00:32:47.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.435 Nvme0n1 : 4.00 17401.50 67.97 0.00 0.00 0.00 0.00 0.00 00:32:47.435 [2024-12-06T17:44:42.219Z] =================================================================================================================== 00:32:47.435 [2024-12-06T17:44:42.219Z] Total : 17401.50 67.97 0.00 0.00 0.00 0.00 0.00 00:32:47.435 00:32:48.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:48.820 Nvme0n1 : 5.00 18950.40 74.03 0.00 0.00 0.00 0.00 0.00 00:32:48.820 [2024-12-06T17:44:43.604Z] =================================================================================================================== 00:32:48.820 [2024-12-06T17:44:43.604Z] Total : 18950.40 74.03 0.00 0.00 0.00 0.00 0.00 00:32:48.820 00:32:49.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:49.392 Nvme0n1 : 6.00 20025.33 78.22 0.00 0.00 0.00 0.00 0.00 00:32:49.392 [2024-12-06T17:44:44.176Z] 
=================================================================================================================== 00:32:49.392 [2024-12-06T17:44:44.176Z] Total : 20025.33 78.22 0.00 0.00 0.00 0.00 0.00 00:32:49.392 00:32:50.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:50.793 Nvme0n1 : 7.00 20793.14 81.22 0.00 0.00 0.00 0.00 0.00 00:32:50.793 [2024-12-06T17:44:45.577Z] =================================================================================================================== 00:32:50.793 [2024-12-06T17:44:45.577Z] Total : 20793.14 81.22 0.00 0.00 0.00 0.00 0.00 00:32:50.793 00:32:51.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.734 Nvme0n1 : 8.00 21377.00 83.50 0.00 0.00 0.00 0.00 0.00 00:32:51.734 [2024-12-06T17:44:46.518Z] =================================================================================================================== 00:32:51.734 [2024-12-06T17:44:46.518Z] Total : 21377.00 83.50 0.00 0.00 0.00 0.00 0.00 00:32:51.734 00:32:52.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.686 Nvme0n1 : 9.00 21836.44 85.30 0.00 0.00 0.00 0.00 0.00 00:32:52.686 [2024-12-06T17:44:47.470Z] =================================================================================================================== 00:32:52.686 [2024-12-06T17:44:47.470Z] Total : 21836.44 85.30 0.00 0.00 0.00 0.00 0.00 00:32:52.686 00:32:53.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.677 Nvme0n1 : 10.00 22192.90 86.69 0.00 0.00 0.00 0.00 0.00 00:32:53.677 [2024-12-06T17:44:48.461Z] =================================================================================================================== 00:32:53.677 [2024-12-06T17:44:48.461Z] Total : 22192.90 86.69 0.00 0.00 0.00 0.00 0.00 00:32:53.677 00:32:53.677 00:32:53.677 Latency(us) 00:32:53.677 [2024-12-06T17:44:48.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.677 Nvme0n1 : 10.00 22195.46 86.70 0.00 0.00 5763.82 2894.51 29491.20 00:32:53.677 [2024-12-06T17:44:48.461Z] =================================================================================================================== 00:32:53.677 [2024-12-06T17:44:48.461Z] Total : 22195.46 86.70 0.00 0.00 5763.82 2894.51 29491.20 00:32:53.677 { 00:32:53.677 "results": [ 00:32:53.677 { 00:32:53.677 "job": "Nvme0n1", 00:32:53.677 "core_mask": "0x2", 00:32:53.677 "workload": "randwrite", 00:32:53.677 "status": "finished", 00:32:53.677 "queue_depth": 128, 00:32:53.677 "io_size": 4096, 00:32:53.677 "runtime": 10.004615, 00:32:53.677 "iops": 22195.45679668833, 00:32:53.677 "mibps": 86.70100311206379, 00:32:53.677 "io_failed": 0, 00:32:53.677 "io_timeout": 0, 00:32:53.677 "avg_latency_us": 5763.819992164174, 00:32:53.677 "min_latency_us": 2894.5066666666667, 00:32:53.677 "max_latency_us": 29491.2 00:32:53.677 } 00:32:53.677 ], 00:32:53.677 "core_count": 1 00:32:53.677 } 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2366473 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2366473 ']' 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2366473 00:32:53.677 
18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2366473 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2366473' 00:32:53.677 killing process with pid 2366473 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2366473 00:32:53.677 Received shutdown signal, test time was about 10.000000 seconds 00:32:53.677 00:32:53.677 Latency(us) 00:32:53.677 [2024-12-06T17:44:48.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.677 [2024-12-06T17:44:48.461Z] =================================================================================================================== 00:32:53.677 [2024-12-06T17:44:48.461Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2366473 00:32:53.677 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:53.938 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:54.199 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:54.199 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:54.199 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:54.199 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:54.199 18:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:54.460 [2024-12-06 18:44:49.093122] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:54.460 
18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:54.460 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:54.461 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:54.722 request: 00:32:54.722 { 00:32:54.722 "uuid": "18b439c3-50d3-47a1-bda0-87ba2111f496", 00:32:54.722 "method": "bdev_lvol_get_lvstores", 00:32:54.722 "req_id": 1 00:32:54.722 } 00:32:54.722 Got JSON-RPC error response 00:32:54.722 response: 00:32:54.722 { 00:32:54.722 "code": -19, 00:32:54.722 "message": "No such device" 00:32:54.722 } 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:54.722 aio_bdev 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
71d81cd7-c36d-431a-a15a-1c08ca615a27 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=71d81cd7-c36d-431a-a15a-1c08ca615a27 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:54.722 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:54.984 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:54.984 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 71d81cd7-c36d-431a-a15a-1c08ca615a27 -t 2000 00:32:55.244 [ 00:32:55.244 { 00:32:55.244 "name": "71d81cd7-c36d-431a-a15a-1c08ca615a27", 00:32:55.244 "aliases": [ 00:32:55.244 "lvs/lvol" 00:32:55.244 ], 00:32:55.244 "product_name": "Logical Volume", 00:32:55.244 "block_size": 4096, 00:32:55.244 "num_blocks": 38912, 00:32:55.244 "uuid": "71d81cd7-c36d-431a-a15a-1c08ca615a27", 00:32:55.244 "assigned_rate_limits": { 00:32:55.244 "rw_ios_per_sec": 0, 00:32:55.244 "rw_mbytes_per_sec": 0, 00:32:55.244 "r_mbytes_per_sec": 0, 00:32:55.244 "w_mbytes_per_sec": 0 00:32:55.244 }, 00:32:55.244 "claimed": false, 00:32:55.244 "zoned": false, 00:32:55.244 "supported_io_types": { 00:32:55.244 "read": true, 00:32:55.244 "write": true, 00:32:55.244 "unmap": true, 00:32:55.244 "flush": false, 00:32:55.244 "reset": true, 00:32:55.244 "nvme_admin": false, 00:32:55.244 "nvme_io": false, 00:32:55.244 "nvme_io_md": false, 00:32:55.244 "write_zeroes": true, 00:32:55.244 "zcopy": false, 00:32:55.244 "get_zone_info": false, 00:32:55.244 "zone_management": false, 00:32:55.244 "zone_append": false, 00:32:55.244 "compare": false, 00:32:55.244 "compare_and_write": false, 00:32:55.244 "abort": false, 00:32:55.244 "seek_hole": true, 00:32:55.244 "seek_data": true, 00:32:55.244 "copy": false, 00:32:55.244 "nvme_iov_md": false 00:32:55.244 }, 00:32:55.244 "driver_specific": { 00:32:55.244 "lvol": { 00:32:55.244 "lvol_store_uuid": "18b439c3-50d3-47a1-bda0-87ba2111f496", 00:32:55.244 "base_bdev": "aio_bdev", 00:32:55.244 "thin_provision": false, 00:32:55.244 "num_allocated_clusters": 38, 00:32:55.244 "snapshot": false, 00:32:55.244 "clone": false, 00:32:55.244 "esnap_clone": false 00:32:55.244 } 00:32:55.244 } 00:32:55.244 } 00:32:55.244 ] 00:32:55.244 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:55.244 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:55.244 18:44:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:55.504 18:44:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:55.504 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:55.504 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:55.504 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:55.504 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 71d81cd7-c36d-431a-a15a-1c08ca615a27 00:32:55.764 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 18b439c3-50d3-47a1-bda0-87ba2111f496 00:32:56.025 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:56.025 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:56.286 00:32:56.286 real 0m16.115s 00:32:56.286 user 0m15.794s 00:32:56.286 sys 0m1.431s 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:56.286 ************************************ 00:32:56.286 END TEST lvs_grow_clean 00:32:56.286 ************************************ 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:56.286 ************************************ 00:32:56.286 START TEST lvs_grow_dirty 00:32:56.286 ************************************ 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:56.286 18:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:56.547 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:56.547 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:56.828 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:32:56.828 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:32:56.828 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:56.828 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:56.828 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:56.828 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 lvol 150 00:32:57.088 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c3637d5b-9549-43a4-a569-299aed000e22 00:32:57.088 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:57.088 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:57.088 [2024-12-06 18:44:51.833028] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:57.088 [2024-12-06 18:44:51.833200] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:57.088 true 00:32:57.088 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:32:57.088 18:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:57.348 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:57.348 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:57.609 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c3637d5b-9549-43a4-a569-299aed000e22 00:32:57.609 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:57.869 [2024-12-06 18:44:52.517607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.869 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2369476 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2369476 /var/tmp/bdevperf.sock 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2369476 ']' 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:58.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
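Condensed, the lvstore-grow sequence this test exercises (clean and dirty variants alike) comes down to a handful of rpc.py calls. A minimal sketch, assuming a running SPDK target; the backing-file path and the $LVS variable are illustrative placeholders for the values created in this run:
  truncate -s 200M /path/to/aio_file                    # 200M backing file (placeholder path)
  rpc.py bdev_aio_create /path/to/aio_file aio_bdev 4096
  LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
  rpc.py bdev_lvol_create -u "$LVS" lvol 150            # 150M lvol inside the 200M store
  truncate -s 400M /path/to/aio_file                    # grow the backing file on disk
  rpc.py bdev_aio_rescan aio_bdev                       # AIO bdev picks up the new block count
  rpc.py bdev_lvol_grow_lvstore -u "$LVS"               # lvstore claims the new clusters
  rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 before grow, 99 after
Until bdev_lvol_grow_lvstore runs, the rescan alone only resizes the base bdev; the lvstore keeps reporting its original 49 data clusters, which is exactly what the (( data_clusters == 49 )) check above asserts.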
00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:58.129 18:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:58.129 [2024-12-06 18:44:52.746631] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:32:58.129 [2024-12-06 18:44:52.746704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2369476 ] 00:32:58.129 [2024-12-06 18:44:52.835306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.129 [2024-12-06 18:44:52.878436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.070 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.070 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:59.070 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:59.070 Nvme0n1 00:32:59.070 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:59.332 [ 00:32:59.332 { 00:32:59.332 "name": "Nvme0n1", 00:32:59.332 "aliases": [ 00:32:59.332 "c3637d5b-9549-43a4-a569-299aed000e22" 00:32:59.332 ], 00:32:59.332 "product_name": "NVMe disk", 00:32:59.332 "block_size": 4096, 00:32:59.332 "num_blocks": 38912, 00:32:59.332 "uuid": "c3637d5b-9549-43a4-a569-299aed000e22", 00:32:59.332 "numa_id": 0, 00:32:59.332 "assigned_rate_limits": { 00:32:59.332 "rw_ios_per_sec": 0, 00:32:59.332 "rw_mbytes_per_sec": 0, 00:32:59.332 "r_mbytes_per_sec": 0, 00:32:59.332 "w_mbytes_per_sec": 0 00:32:59.332 }, 00:32:59.332 "claimed": false, 00:32:59.332 "zoned": false, 00:32:59.332 "supported_io_types": { 00:32:59.332 "read": true, 00:32:59.332 "write": true, 00:32:59.332 "unmap": true, 00:32:59.332 "flush": true, 00:32:59.332 "reset": true, 00:32:59.332 "nvme_admin": true, 00:32:59.332 "nvme_io": true, 00:32:59.332 "nvme_io_md": false, 00:32:59.332 "write_zeroes": true, 00:32:59.332 "zcopy": false, 00:32:59.332 "get_zone_info": false, 00:32:59.332 "zone_management": false, 00:32:59.332 "zone_append": false, 00:32:59.332 "compare": true, 00:32:59.332 "compare_and_write": true, 00:32:59.332 "abort": true, 00:32:59.332 "seek_hole": false, 00:32:59.332 "seek_data": false, 00:32:59.332 "copy": true, 00:32:59.332 "nvme_iov_md": false 00:32:59.332 }, 00:32:59.332 "memory_domains": [ 00:32:59.332 { 00:32:59.332 "dma_device_id": "system", 00:32:59.332 "dma_device_type": 1 00:32:59.332 } 00:32:59.332 ], 00:32:59.332 "driver_specific": { 00:32:59.332 "nvme": [ 00:32:59.332 { 00:32:59.332 "trid": { 00:32:59.332 "trtype": "TCP", 00:32:59.332 "adrfam": "IPv4", 00:32:59.332 "traddr": "10.0.0.2", 00:32:59.332 "trsvcid": "4420", 00:32:59.332 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:59.332 }, 00:32:59.332 "ctrlr_data": 
{ 00:32:59.332 "cntlid": 1, 00:32:59.332 "vendor_id": "0x8086", 00:32:59.332 "model_number": "SPDK bdev Controller", 00:32:59.332 "serial_number": "SPDK0", 00:32:59.332 "firmware_revision": "25.01", 00:32:59.332 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:59.332 "oacs": { 00:32:59.332 "security": 0, 00:32:59.332 "format": 0, 00:32:59.332 "firmware": 0, 00:32:59.332 "ns_manage": 0 00:32:59.332 }, 00:32:59.332 "multi_ctrlr": true, 00:32:59.332 "ana_reporting": false 00:32:59.332 }, 00:32:59.332 "vs": { 00:32:59.332 "nvme_version": "1.3" 00:32:59.332 }, 00:32:59.332 "ns_data": { 00:32:59.332 "id": 1, 00:32:59.332 "can_share": true 00:32:59.332 } 00:32:59.332 } 00:32:59.332 ], 00:32:59.332 "mp_policy": "active_passive" 00:32:59.332 } 00:32:59.332 } 00:32:59.332 ] 00:32:59.332 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2369638 00:32:59.332 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:59.332 18:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:59.332 Running I/O for 10 seconds... 00:33:00.719 Latency(us) 00:33:00.719 [2024-12-06T17:44:55.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.719 Nvme0n1 : 1.00 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:33:00.719 [2024-12-06T17:44:55.503Z] =================================================================================================================== 00:33:00.719 [2024-12-06T17:44:55.503Z] Total : 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:33:00.719 00:33:01.289 18:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:01.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:01.289 Nvme0n1 : 2.00 17081.50 66.72 0.00 0.00 0.00 0.00 0.00 00:33:01.289 [2024-12-06T17:44:56.073Z] =================================================================================================================== 00:33:01.289 [2024-12-06T17:44:56.073Z] Total : 17081.50 66.72 0.00 0.00 0.00 0.00 0.00 00:33:01.289 00:33:01.551 true 00:33:01.551 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:01.551 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:01.812 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:01.812 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:01.812 18:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2369638 00:33:02.384 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.384 Nvme0n1 : 
3.00 17166.33 67.06 0.00 0.00 0.00 0.00 0.00 00:33:02.384 [2024-12-06T17:44:57.168Z] =================================================================================================================== 00:33:02.384 [2024-12-06T17:44:57.168Z] Total : 17166.33 67.06 0.00 0.00 0.00 0.00 0.00 00:33:02.384 00:33:03.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.327 Nvme0n1 : 4.00 17304.00 67.59 0.00 0.00 0.00 0.00 0.00 00:33:03.327 [2024-12-06T17:44:58.111Z] =================================================================================================================== 00:33:03.327 [2024-12-06T17:44:58.111Z] Total : 17304.00 67.59 0.00 0.00 0.00 0.00 0.00 00:33:03.327 00:33:04.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:04.711 Nvme0n1 : 5.00 18466.00 72.13 0.00 0.00 0.00 0.00 0.00 00:33:04.711 [2024-12-06T17:44:59.495Z] =================================================================================================================== 00:33:04.711 [2024-12-06T17:44:59.495Z] Total : 18466.00 72.13 0.00 0.00 0.00 0.00 0.00 00:33:04.711 00:33:05.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:05.657 Nvme0n1 : 6.00 19632.67 76.69 0.00 0.00 0.00 0.00 0.00 00:33:05.657 [2024-12-06T17:45:00.441Z] =================================================================================================================== 00:33:05.657 [2024-12-06T17:45:00.441Z] Total : 19632.67 76.69 0.00 0.00 0.00 0.00 0.00 00:33:05.657 00:33:06.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:06.602 Nvme0n1 : 7.00 20463.57 79.94 0.00 0.00 0.00 0.00 0.00 00:33:06.602 [2024-12-06T17:45:01.386Z] =================================================================================================================== 00:33:06.602 [2024-12-06T17:45:01.386Z] Total : 20463.57 79.94 0.00 0.00 0.00 0.00 0.00 00:33:06.602 00:33:07.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:07.546 Nvme0n1 : 8.00 21094.62 82.40 0.00 0.00 0.00 0.00 0.00 00:33:07.546 [2024-12-06T17:45:02.330Z] =================================================================================================================== 00:33:07.546 [2024-12-06T17:45:02.330Z] Total : 21094.62 82.40 0.00 0.00 0.00 0.00 0.00 00:33:07.546 00:33:08.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:08.488 Nvme0n1 : 9.00 21587.11 84.32 0.00 0.00 0.00 0.00 0.00 00:33:08.488 [2024-12-06T17:45:03.272Z] =================================================================================================================== 00:33:08.488 [2024-12-06T17:45:03.272Z] Total : 21587.11 84.32 0.00 0.00 0.00 0.00 0.00 00:33:08.488 00:33:09.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:09.433 Nvme0n1 : 10.00 21982.80 85.87 0.00 0.00 0.00 0.00 0.00 00:33:09.433 [2024-12-06T17:45:04.217Z] =================================================================================================================== 00:33:09.433 [2024-12-06T17:45:04.217Z] Total : 21982.80 85.87 0.00 0.00 0.00 0.00 0.00 00:33:09.433 00:33:09.433 00:33:09.433 Latency(us) 00:33:09.433 [2024-12-06T17:45:04.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:09.433 Nvme0n1 : 10.00 21983.58 85.87 0.00 0.00 5819.26 2757.97 31238.83 00:33:09.434 
[2024-12-06T17:45:04.218Z] =================================================================================================================== 00:33:09.434 [2024-12-06T17:45:04.218Z] Total : 21983.58 85.87 0.00 0.00 5819.26 2757.97 31238.83 00:33:09.434 { 00:33:09.434 "results": [ 00:33:09.434 { 00:33:09.434 "job": "Nvme0n1", 00:33:09.434 "core_mask": "0x2", 00:33:09.434 "workload": "randwrite", 00:33:09.434 "status": "finished", 00:33:09.434 "queue_depth": 128, 00:33:09.434 "io_size": 4096, 00:33:09.434 "runtime": 10.004693, 00:33:09.434 "iops": 21983.58310444908, 00:33:09.434 "mibps": 85.87337150175422, 00:33:09.434 "io_failed": 0, 00:33:09.434 "io_timeout": 0, 00:33:09.434 "avg_latency_us": 5819.257708607083, 00:33:09.434 "min_latency_us": 2757.9733333333334, 00:33:09.434 "max_latency_us": 31238.826666666668 00:33:09.434 } 00:33:09.434 ], 00:33:09.434 "core_count": 1 00:33:09.434 } 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2369476 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2369476 ']' 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2369476 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2369476 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2369476' 00:33:09.434 killing process with pid 2369476 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2369476 00:33:09.434 Received shutdown signal, test time was about 10.000000 seconds 00:33:09.434 00:33:09.434 Latency(us) 00:33:09.434 [2024-12-06T17:45:04.218Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.434 [2024-12-06T17:45:04.218Z] =================================================================================================================== 00:33:09.434 [2024-12-06T17:45:04.218Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:09.434 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2369476 00:33:09.695 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:09.695 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:33:09.955 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:09.955 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2365831 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2365831 00:33:10.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2365831 Killed "${NVMF_APP[@]}" "$@" 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2371772 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2371772 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2371772 ']' 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
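The dirty variant hinges on the target restart seen here: the old nvmf app was SIGKILLed mid-run, and a fresh one is launched with SPDK's interrupt mode enabled. A sketch of the relaunch with the same flags as above (the ip netns prefix matches this rig's network-namespace setup):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
-m 0x1 pins the app to core 0, -e 0xFFFF enables all tracepoint groups (hence the nvmf_trace.0 shm file collected at the end of the test), and -i 0 sets the shared-memory instance ID that names it.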
00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.217 18:45:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:10.217 [2024-12-06 18:45:04.925593] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:10.217 [2024-12-06 18:45:04.926594] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:33:10.217 [2024-12-06 18:45:04.926646] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.477 [2024-12-06 18:45:05.020044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.477 [2024-12-06 18:45:05.058308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.477 [2024-12-06 18:45:05.058354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.477 [2024-12-06 18:45:05.058360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.477 [2024-12-06 18:45:05.058364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.477 [2024-12-06 18:45:05.058368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:10.477 [2024-12-06 18:45:05.058913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.477 [2024-12-06 18:45:05.114539] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:10.477 [2024-12-06 18:45:05.114751] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
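Because the previous target died to kill -9 with the lvstore still open, the store was never cleanly unloaded; re-registering the same backing file is all it takes to trigger blobstore recovery, as the bs_recover notices below show. Roughly, reusing the placeholders from the earlier sketch:
  rpc.py bdev_aio_create /path/to/aio_file aio_bdev 4096   # examine finds the dirty lvstore; metadata is replayed
  rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'   # 61 again once recovery completes
The recovered store must still report the grown size, which is what the free_clusters == 61 and data_clusters == 99 checks below verify.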
00:33:11.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:11.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:11.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:11.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:11.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.060 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:11.320 [2024-12-06 18:45:05.929083] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:11.320 [2024-12-06 18:45:05.929349] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:11.320 [2024-12-06 18:45:05.929440] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:11.320 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:11.320 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c3637d5b-9549-43a4-a569-299aed000e22 00:33:11.320 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c3637d5b-9549-43a4-a569-299aed000e22 00:33:11.320 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:11.320 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:11.320 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:11.320 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:11.320 18:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:11.581 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c3637d5b-9549-43a4-a569-299aed000e22 -t 2000 00:33:11.581 [ 00:33:11.581 { 00:33:11.581 "name": "c3637d5b-9549-43a4-a569-299aed000e22", 00:33:11.581 "aliases": [ 00:33:11.581 "lvs/lvol" 00:33:11.581 ], 00:33:11.581 "product_name": "Logical Volume", 00:33:11.581 "block_size": 4096, 00:33:11.581 "num_blocks": 38912, 00:33:11.581 "uuid": "c3637d5b-9549-43a4-a569-299aed000e22", 00:33:11.581 "assigned_rate_limits": { 00:33:11.581 "rw_ios_per_sec": 0, 00:33:11.581 "rw_mbytes_per_sec": 0, 00:33:11.581 
"r_mbytes_per_sec": 0, 00:33:11.581 "w_mbytes_per_sec": 0 00:33:11.581 }, 00:33:11.581 "claimed": false, 00:33:11.581 "zoned": false, 00:33:11.581 "supported_io_types": { 00:33:11.581 "read": true, 00:33:11.581 "write": true, 00:33:11.581 "unmap": true, 00:33:11.581 "flush": false, 00:33:11.581 "reset": true, 00:33:11.581 "nvme_admin": false, 00:33:11.581 "nvme_io": false, 00:33:11.581 "nvme_io_md": false, 00:33:11.581 "write_zeroes": true, 00:33:11.581 "zcopy": false, 00:33:11.581 "get_zone_info": false, 00:33:11.581 "zone_management": false, 00:33:11.581 "zone_append": false, 00:33:11.581 "compare": false, 00:33:11.581 "compare_and_write": false, 00:33:11.581 "abort": false, 00:33:11.581 "seek_hole": true, 00:33:11.581 "seek_data": true, 00:33:11.581 "copy": false, 00:33:11.581 "nvme_iov_md": false 00:33:11.581 }, 00:33:11.581 "driver_specific": { 00:33:11.581 "lvol": { 00:33:11.581 "lvol_store_uuid": "3f2b39f0-ce7a-44d6-bf72-df3c53d92b27", 00:33:11.581 "base_bdev": "aio_bdev", 00:33:11.581 "thin_provision": false, 00:33:11.581 "num_allocated_clusters": 38, 00:33:11.581 "snapshot": false, 00:33:11.581 "clone": false, 00:33:11.581 "esnap_clone": false 00:33:11.581 } 00:33:11.581 } 00:33:11.581 } 00:33:11.581 ] 00:33:11.581 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:11.581 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:11.581 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:11.842 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:11.842 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:11.842 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:11.842 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:11.842 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:12.101 [2024-12-06 18:45:06.775466] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:12.101 18:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:12.101 18:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:12.362 request: 00:33:12.362 { 00:33:12.362 "uuid": "3f2b39f0-ce7a-44d6-bf72-df3c53d92b27", 00:33:12.362 "method": "bdev_lvol_get_lvstores", 00:33:12.362 "req_id": 1 00:33:12.362 } 00:33:12.362 Got JSON-RPC error response 00:33:12.362 response: 00:33:12.362 { 00:33:12.362 "code": -19, 00:33:12.362 "message": "No such device" 00:33:12.362 } 00:33:12.362 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:12.362 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:12.362 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:12.362 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:12.362 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:12.622 aio_bdev 00:33:12.622 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c3637d5b-9549-43a4-a569-299aed000e22 00:33:12.622 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c3637d5b-9549-43a4-a569-299aed000e22 00:33:12.622 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:12.622 18:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:12.622 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:12.622 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:12.622 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:12.622 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c3637d5b-9549-43a4-a569-299aed000e22 -t 2000 00:33:12.881 [ 00:33:12.881 { 00:33:12.881 "name": "c3637d5b-9549-43a4-a569-299aed000e22", 00:33:12.881 "aliases": [ 00:33:12.881 "lvs/lvol" 00:33:12.881 ], 00:33:12.881 "product_name": "Logical Volume", 00:33:12.881 "block_size": 4096, 00:33:12.881 "num_blocks": 38912, 00:33:12.881 "uuid": "c3637d5b-9549-43a4-a569-299aed000e22", 00:33:12.881 "assigned_rate_limits": { 00:33:12.881 "rw_ios_per_sec": 0, 00:33:12.881 "rw_mbytes_per_sec": 0, 00:33:12.881 "r_mbytes_per_sec": 0, 00:33:12.881 "w_mbytes_per_sec": 0 00:33:12.881 }, 00:33:12.881 "claimed": false, 00:33:12.881 "zoned": false, 00:33:12.881 "supported_io_types": { 00:33:12.881 "read": true, 00:33:12.881 "write": true, 00:33:12.881 "unmap": true, 00:33:12.881 "flush": false, 00:33:12.881 "reset": true, 00:33:12.881 "nvme_admin": false, 00:33:12.881 "nvme_io": false, 00:33:12.881 "nvme_io_md": false, 00:33:12.881 "write_zeroes": true, 00:33:12.881 "zcopy": false, 00:33:12.881 "get_zone_info": false, 00:33:12.881 "zone_management": false, 00:33:12.881 "zone_append": false, 00:33:12.881 "compare": false, 00:33:12.881 "compare_and_write": false, 00:33:12.881 "abort": false, 00:33:12.881 "seek_hole": true, 00:33:12.881 "seek_data": true, 00:33:12.881 "copy": false, 00:33:12.881 "nvme_iov_md": false 00:33:12.881 }, 00:33:12.881 "driver_specific": { 00:33:12.881 "lvol": { 00:33:12.881 "lvol_store_uuid": "3f2b39f0-ce7a-44d6-bf72-df3c53d92b27", 00:33:12.881 "base_bdev": "aio_bdev", 00:33:12.881 "thin_provision": false, 00:33:12.881 "num_allocated_clusters": 38, 00:33:12.881 "snapshot": false, 00:33:12.881 "clone": false, 00:33:12.881 "esnap_clone": false 00:33:12.881 } 00:33:12.881 } 00:33:12.881 } 00:33:12.881 ] 00:33:12.881 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:12.881 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:12.881 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:13.140 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:13.140 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:13.140 18:45:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:13.140 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:13.140 18:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c3637d5b-9549-43a4-a569-299aed000e22 00:33:13.400 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f2b39f0-ce7a-44d6-bf72-df3c53d92b27 00:33:13.659 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:13.659 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:13.919 00:33:13.919 real 0m17.523s 00:33:13.919 user 0m35.328s 00:33:13.919 sys 0m3.250s 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:13.919 ************************************ 00:33:13.919 END TEST lvs_grow_dirty 00:33:13.919 ************************************ 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:13.919 nvmf_trace.0 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
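Teardown mirrors setup: drop the lvol, then the store, then the AIO bdev and its backing file. A sketch with this run's UUIDs replaced by placeholders:
  rpc.py bdev_lvol_delete "$LVOL"             # lvol bdev UUID returned by bdev_lvol_create
  rpc.py bdev_lvol_delete_lvstore -u "$LVS"
  rpc.py bdev_aio_delete aio_bdev
  rm -f /path/to/aio_file
Deleting in this order avoids the hot-remove path; deleting the AIO bdev first would instead force the lvstore closed, as the vbdev_lvs_hotremove_cb notices earlier in the log demonstrate.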
00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:13.919 rmmod nvme_tcp 00:33:13.919 rmmod nvme_fabrics 00:33:13.919 rmmod nvme_keyring 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2371772 ']' 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2371772 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2371772 ']' 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2371772 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:13.919 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2371772 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2371772' 00:33:14.180 killing process with pid 2371772 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2371772 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2371772 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.180 18:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.730 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:16.730 00:33:16.730 real 0m45.106s 00:33:16.730 user 0m54.126s 00:33:16.730 sys 0m10.915s 00:33:16.731 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.731 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:16.731 ************************************ 00:33:16.731 END TEST nvmf_lvs_grow 00:33:16.731 ************************************ 00:33:16.731 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:16.731 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:16.731 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.731 18:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:16.731 ************************************ 00:33:16.731 START TEST nvmf_bdev_io_wait 00:33:16.731 ************************************ 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:16.731 * Looking for test storage... 
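Since the same shutdown path closes out every test in this log, it helps to see the nvmftestfini sequence traced above laid out in one place (a paraphrase of the xtrace, not the verbatim nvmf/common.sh source; nvmfpid stands for the tracked target PID, 2371772 in the run above):

    sync                                    # settle outstanding I/O first
    modprobe -v -r nvme-tcp                 # also unloads nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"      # stop the nvmf_tgt reactor process
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK-tagged rules
    _remove_spdk_ns                         # removes cvl_0_0_ns_spdk (output squelched via fd 15)
    ip -4 addr flush cvl_0_1                # clear the initiator-side address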
00:33:16.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:16.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.731 --rc genhtml_branch_coverage=1 00:33:16.731 --rc genhtml_function_coverage=1 00:33:16.731 --rc genhtml_legend=1 00:33:16.731 --rc geninfo_all_blocks=1 00:33:16.731 --rc geninfo_unexecuted_blocks=1 00:33:16.731 00:33:16.731 ' 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:16.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.731 --rc genhtml_branch_coverage=1 00:33:16.731 --rc genhtml_function_coverage=1 00:33:16.731 --rc genhtml_legend=1 00:33:16.731 --rc geninfo_all_blocks=1 00:33:16.731 --rc geninfo_unexecuted_blocks=1 00:33:16.731 00:33:16.731 ' 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:16.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.731 --rc genhtml_branch_coverage=1 00:33:16.731 --rc genhtml_function_coverage=1 00:33:16.731 --rc genhtml_legend=1 00:33:16.731 --rc geninfo_all_blocks=1 00:33:16.731 --rc geninfo_unexecuted_blocks=1 00:33:16.731 00:33:16.731 ' 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:16.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:16.731 --rc genhtml_branch_coverage=1 00:33:16.731 --rc genhtml_function_coverage=1 00:33:16.731 --rc genhtml_legend=1 00:33:16.731 --rc geninfo_all_blocks=1 00:33:16.731 --rc 
geninfo_unexecuted_blocks=1 00:33:16.731 00:33:16.731 ' 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.731 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:16.732 18:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.876 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:24.877 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:24.877 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:24.877 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:24.877 
18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:24.877 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:24.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:24.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:33:24.877 00:33:24.877 --- 10.0.0.2 ping statistics --- 00:33:24.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.877 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:24.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:24.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:33:24.877 00:33:24.877 --- 10.0.0.1 ping statistics --- 00:33:24.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:24.877 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2377277 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2377277 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2377277 ']' 00:33:24.877 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:24.878 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.878 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:24.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
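The interface plumbing traced above is the crux of the phy setup: the two E810 ports are split across network namespaces so target and initiator traffic really crosses the wire. Reconstructed from the trace (names and addresses exactly as logged):

    ip netns add cvl_0_0_ns_spdk                    # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # first port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                              # host -> target, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> host

The SPDK_NVMF comment on the iptables rule is what lets the teardown later strip exactly the rules this harness added and nothing else.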
00:33:24.878 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.878 18:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.878 [2024-12-06 18:45:18.712352] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:24.878 [2024-12-06 18:45:18.713442] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:33:24.878 [2024-12-06 18:45:18.713490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:24.878 [2024-12-06 18:45:18.812625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:24.878 [2024-12-06 18:45:18.866260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:24.878 [2024-12-06 18:45:18.866321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:24.878 [2024-12-06 18:45:18.866330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:24.878 [2024-12-06 18:45:18.866337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:24.878 [2024-12-06 18:45:18.866343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:24.878 [2024-12-06 18:45:18.868389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.878 [2024-12-06 18:45:18.868549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:24.878 [2024-12-06 18:45:18.868710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:24.878 [2024-12-06 18:45:18.868712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.878 [2024-12-06 18:45:18.869494] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
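Two details of the startup above matter for the rest of this suite. First, the target is launched inside the target namespace with interrupt mode on and framework init deferred (command reconstructed from the trace, path shortened):

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
    # -m 0xF         : four reactors, matching the four 'Reactor started' lines
    # -e 0xFFFF      : enable all tracepoint groups (hence the spdk_trace notices)
    # --wait-for-rpc : pause before framework init so bdev options can still
    #                  be changed over the RPC socket (next step in the trace)

Second, --interrupt-mode is why every spdk_thread reports being set to intr mode: reactors wait on file descriptors instead of busy-polling, which is precisely the behavior this interrupt-mode pass of the test suite exists to exercise.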
00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.878 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:24.878 [2024-12-06 18:45:19.657998] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:24.878 [2024-12-06 18:45:19.658699] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:24.878 [2024-12-06 18:45:19.658827] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:25.140 [2024-12-06 18:45:19.658960] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
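The RPC ordering here is deliberate: bdev_set_options is only accepted while the app is still paused under --wait-for-rpc, so it must precede framework_start_init, after which the poll groups come up (in interrupt mode, per the notices above) and the target can be provisioned. Laid out in the order the trace executes it, including the provisioning calls that follow just below (flags exactly as logged; per rpc.py, -p and -c here are the bdev I/O pool and cache sizes):

    rpc.py bdev_set_options -p 5 -c 1          # legal only before framework init
    rpc.py framework_start_init                # poll groups start, intr mode
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420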
00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:25.140 [2024-12-06 18:45:19.669717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:25.140 Malloc0 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:25.140 [2024-12-06 18:45:19.742249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2377410 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:25.140 18:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2377413 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:25.140 { 00:33:25.140 "params": { 00:33:25.140 "name": "Nvme$subsystem", 00:33:25.140 "trtype": "$TEST_TRANSPORT", 00:33:25.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.140 "adrfam": "ipv4", 00:33:25.140 "trsvcid": "$NVMF_PORT", 00:33:25.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.140 "hdgst": ${hdgst:-false}, 00:33:25.140 "ddgst": ${ddgst:-false} 00:33:25.140 }, 00:33:25.140 "method": "bdev_nvme_attach_controller" 00:33:25.140 } 00:33:25.140 EOF 00:33:25.140 )") 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2377416 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:25.140 { 00:33:25.140 "params": { 00:33:25.140 "name": "Nvme$subsystem", 00:33:25.140 "trtype": "$TEST_TRANSPORT", 00:33:25.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.140 "adrfam": "ipv4", 00:33:25.140 "trsvcid": "$NVMF_PORT", 00:33:25.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.140 "hdgst": ${hdgst:-false}, 00:33:25.140 "ddgst": ${ddgst:-false} 00:33:25.140 }, 00:33:25.140 "method": "bdev_nvme_attach_controller" 00:33:25.140 } 00:33:25.140 EOF 00:33:25.140 )") 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2377419 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 
256 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:25.140 { 00:33:25.140 "params": { 00:33:25.140 "name": "Nvme$subsystem", 00:33:25.140 "trtype": "$TEST_TRANSPORT", 00:33:25.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.140 "adrfam": "ipv4", 00:33:25.140 "trsvcid": "$NVMF_PORT", 00:33:25.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.140 "hdgst": ${hdgst:-false}, 00:33:25.140 "ddgst": ${ddgst:-false} 00:33:25.140 }, 00:33:25.140 "method": "bdev_nvme_attach_controller" 00:33:25.140 } 00:33:25.140 EOF 00:33:25.140 )") 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:25.140 { 00:33:25.140 "params": { 00:33:25.140 "name": "Nvme$subsystem", 00:33:25.140 "trtype": "$TEST_TRANSPORT", 00:33:25.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:25.140 "adrfam": "ipv4", 00:33:25.140 "trsvcid": "$NVMF_PORT", 00:33:25.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:25.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:25.140 "hdgst": ${hdgst:-false}, 00:33:25.140 "ddgst": ${ddgst:-false} 00:33:25.140 }, 00:33:25.140 "method": "bdev_nvme_attach_controller" 00:33:25.140 } 00:33:25.140 EOF 00:33:25.140 )") 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2377410 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:25.140 "params": { 00:33:25.140 "name": "Nvme1", 00:33:25.140 "trtype": "tcp", 00:33:25.140 "traddr": "10.0.0.2", 00:33:25.140 "adrfam": "ipv4", 00:33:25.140 "trsvcid": "4420", 00:33:25.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:25.140 "hdgst": false, 00:33:25.140 "ddgst": false 00:33:25.140 }, 00:33:25.140 "method": "bdev_nvme_attach_controller" 00:33:25.140 }' 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:25.140 "params": { 00:33:25.140 "name": "Nvme1", 00:33:25.140 "trtype": "tcp", 00:33:25.140 "traddr": "10.0.0.2", 00:33:25.140 "adrfam": "ipv4", 00:33:25.140 "trsvcid": "4420", 00:33:25.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:25.140 "hdgst": false, 00:33:25.140 "ddgst": false 00:33:25.140 }, 00:33:25.140 "method": "bdev_nvme_attach_controller" 00:33:25.140 }' 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:25.140 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:25.140 "params": { 00:33:25.140 "name": "Nvme1", 00:33:25.140 "trtype": "tcp", 00:33:25.140 "traddr": "10.0.0.2", 00:33:25.140 "adrfam": "ipv4", 00:33:25.140 "trsvcid": "4420", 00:33:25.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:25.141 "hdgst": false, 00:33:25.141 "ddgst": false 00:33:25.141 }, 00:33:25.141 "method": "bdev_nvme_attach_controller" 00:33:25.141 }' 00:33:25.141 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:25.141 18:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:25.141 "params": { 00:33:25.141 "name": "Nvme1", 00:33:25.141 "trtype": "tcp", 00:33:25.141 "traddr": "10.0.0.2", 00:33:25.141 "adrfam": "ipv4", 00:33:25.141 "trsvcid": "4420", 00:33:25.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:25.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:25.141 "hdgst": false, 00:33:25.141 "ddgst": false 00:33:25.141 }, 00:33:25.141 "method": "bdev_nvme_attach_controller" 00:33:25.141 }' 00:33:25.141 [2024-12-06 18:45:19.800279] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:33:25.141 [2024-12-06 18:45:19.800353] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:25.141 [2024-12-06 18:45:19.803425] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:33:25.141 [2024-12-06 18:45:19.803486] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:25.141 [2024-12-06 18:45:19.811645] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:33:25.141 [2024-12-06 18:45:19.811717] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:25.141 [2024-12-06 18:45:19.816664] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:33:25.141 [2024-12-06 18:45:19.816745] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:25.401 [2024-12-06 18:45:20.013090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.402 [2024-12-06 18:45:20.056281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:25.402 [2024-12-06 18:45:20.106200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.402 [2024-12-06 18:45:20.146881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:25.402 [2024-12-06 18:45:20.175744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.663 [2024-12-06 18:45:20.213888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:25.663 [2024-12-06 18:45:20.249367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.663 [2024-12-06 18:45:20.288829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:25.663 Running I/O for 1 seconds... 00:33:25.663 Running I/O for 1 seconds... 00:33:25.663 Running I/O for 1 seconds... 00:33:25.924 Running I/O for 1 seconds... 
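At this point four bdevperf instances are running against the same cnode1, one per I/O type, each pinned to its own core and each fed its own JSON config through process substitution. Reconstructed for the write job (a sketch; the read, flush and unmap jobs at PIDs 2377413, 2377416 and 2377419 differ only in -m, -i and -w):

    build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256
    # gen_nvmf_target_json emits the bdev_nvme_attach_controller config printed
    # above: Nvme1 over tcp to traddr 10.0.0.2, trsvcid 4420,
    # subnqn nqn.2016-06.io.spdk:cnode1, header/data digests off.
    # -q 128 -o 4096 : queue depth 128 with 4 KiB I/O; -t 1 : run for 1 second.

The "--json /dev/fd/63" seen in the trace is simply what that process substitution looks like once bash expands it.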
00:33:26.868 180992.00 IOPS, 707.00 MiB/s
00:33:26.868 Latency(us)
00:33:26.868 [2024-12-06T17:45:21.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:26.868 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:33:26.868 Nvme1n1 : 1.00 180634.58 705.60 0.00 0.00 704.76 295.25 1966.08
00:33:26.868 [2024-12-06T17:45:21.652Z] ===================================================================================================================
00:33:26.868 [2024-12-06T17:45:21.652Z] Total : 180634.58 705.60 0.00 0.00 704.76 295.25 1966.08
00:33:26.868 6703.00 IOPS, 26.18 MiB/s
[2024-12-06T17:45:21.652Z] 13199.00 IOPS, 51.56 MiB/s
00:33:26.868 Latency(us)
00:33:26.868 [2024-12-06T17:45:21.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:26.868 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:33:26.868 Nvme1n1 : 1.02 6706.72 26.20 0.00 0.00 18860.97 4669.44 30365.01
00:33:26.868 [2024-12-06T17:45:21.652Z] ===================================================================================================================
00:33:26.868 [2024-12-06T17:45:21.652Z] Total : 6706.72 26.20 0.00 0.00 18860.97 4669.44 30365.01
00:33:26.868
00:33:26.868 Latency(us)
00:33:26.868 [2024-12-06T17:45:21.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:26.868 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:33:26.868 Nvme1n1 : 1.01 13253.21 51.77 0.00 0.00 9625.52 2621.44 15400.96
00:33:26.868 [2024-12-06T17:45:21.652Z] ===================================================================================================================
00:33:26.868 [2024-12-06T17:45:21.652Z] Total : 13253.21 51.77 0.00 0.00 9625.52 2621.44 15400.96
00:33:26.868 6783.00 IOPS, 26.50 MiB/s
00:33:26.868 Latency(us)
00:33:26.868 [2024-12-06T17:45:21.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:26.868 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:33:26.868 Nvme1n1 : 1.01 6904.61 26.97 0.00 0.00 18483.14 4369.07 36263.25
00:33:26.868 [2024-12-06T17:45:21.652Z] ===================================================================================================================
00:33:26.868 [2024-12-06T17:45:21.652Z] Total : 6904.61 26.97 0.00 0.00 18483.14 4369.07 36263.25
00:33:26.868 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2377413
18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2377416
18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2377419
18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:26.868 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
18:45:21
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:26.868 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:26.868 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:26.868 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:26.868 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:26.868 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:26.868 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:26.868 rmmod nvme_tcp 00:33:26.868 rmmod nvme_fabrics 00:33:27.129 rmmod nvme_keyring 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2377277 ']' 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2377277 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2377277 ']' 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2377277 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2377277 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2377277' 00:33:27.129 killing process with pid 2377277 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2377277 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2377277 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:27.129 18:45:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:27.129 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:27.130 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:27.391 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:27.391 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:27.391 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.391 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.391 18:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.307 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:29.307 00:33:29.307 real 0m12.981s 00:33:29.307 user 0m15.769s 00:33:29.307 sys 0m7.505s 00:33:29.307 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:29.307 18:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:29.307 ************************************ 00:33:29.307 END TEST nvmf_bdev_io_wait 00:33:29.307 ************************************ 00:33:29.307 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:29.307 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:29.307 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:29.307 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:29.307 ************************************ 00:33:29.307 START TEST nvmf_queue_depth 00:33:29.307 ************************************ 00:33:29.307 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:29.569 * Looking for test storage... 
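The xtrace that follows expands the lt/cmp_versions helpers from scripts/common.sh, which split two dotted version strings on '.', '-' and ':' and compare them field by field (here deciding whether the installed lcov predates 2.x). A condensed sketch of that comparison, assuming purely numeric fields:

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1    # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov predates 2.x"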
00:33:29.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:29.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.569 --rc genhtml_branch_coverage=1 00:33:29.569 --rc genhtml_function_coverage=1 00:33:29.569 --rc genhtml_legend=1 00:33:29.569 --rc geninfo_all_blocks=1 00:33:29.569 --rc geninfo_unexecuted_blocks=1 00:33:29.569 00:33:29.569 ' 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:29.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.569 --rc genhtml_branch_coverage=1 00:33:29.569 --rc genhtml_function_coverage=1 00:33:29.569 --rc genhtml_legend=1 00:33:29.569 --rc geninfo_all_blocks=1 00:33:29.569 --rc geninfo_unexecuted_blocks=1 00:33:29.569 00:33:29.569 ' 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:29.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.569 --rc genhtml_branch_coverage=1 00:33:29.569 --rc genhtml_function_coverage=1 00:33:29.569 --rc genhtml_legend=1 00:33:29.569 --rc geninfo_all_blocks=1 00:33:29.569 --rc geninfo_unexecuted_blocks=1 00:33:29.569 00:33:29.569 ' 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:29.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:29.569 --rc genhtml_branch_coverage=1 00:33:29.569 --rc genhtml_function_coverage=1 00:33:29.569 --rc genhtml_legend=1 00:33:29.569 --rc geninfo_all_blocks=1 00:33:29.569 --rc 
geninfo_unexecuted_blocks=1 00:33:29.569 00:33:29.569 ' 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.569 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:29.570 18:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.716 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:37.716 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:37.716 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:37.716 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:37.716 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:37.717 18:45:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:37.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:37.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:37.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:37.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:37.717 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:37.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:37.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:33:37.718 00:33:37.718 --- 10.0.0.2 ping statistics --- 00:33:37.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.718 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:37.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:37.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:33:37.718 00:33:37.718 --- 10.0.0.1 ping statistics --- 00:33:37.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.718 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2381992 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2381992 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2381992 ']' 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
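The sequence traced above moves one E810 port into a private network namespace and starts nvmf_tgt inside it, so target (10.0.0.2) and initiator (10.0.0.1) traffic has to cross the physical link rather than loopback. Condensed from the commands in the trace:

    # Target port lives in its own netns; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Single-core target in interrupt mode, as in the waitforlisten line above:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2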
00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:37.718 18:45:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.718 [2024-12-06 18:45:31.844921] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:37.718 [2024-12-06 18:45:31.846071] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:33:37.718 [2024-12-06 18:45:31.846123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.718 [2024-12-06 18:45:31.949336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.718 [2024-12-06 18:45:31.999263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.718 [2024-12-06 18:45:31.999316] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.718 [2024-12-06 18:45:31.999325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:37.718 [2024-12-06 18:45:31.999332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:37.718 [2024-12-06 18:45:31.999339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:37.718 [2024-12-06 18:45:32.000100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.718 [2024-12-06 18:45:32.077280] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:37.718 [2024-12-06 18:45:32.077557] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
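With the target up, the rpc_cmd calls traced below create the TCP transport, a 64 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. The same configuration expressed through scripts/rpc.py (a sketch; arguments copied from the trace, default RPC socket assumed):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420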
00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.993 [2024-12-06 18:45:32.705706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.993 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.994 Malloc0 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.994 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:38.262 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.262 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
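The measurement traced below starts bdevperf in -z (wait-for-RPC) mode on its own socket, attaches a controller to the target over TCP, then drives the verify workload at queue depth 1024 via bdevperf.py. A manual equivalent, with paths and arguments as in the trace:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests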
00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:38.263 [2024-12-06 18:45:32.789024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2382187 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2382187 /var/tmp/bdevperf.sock 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2382187 ']' 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:38.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.263 18:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:38.263 [2024-12-06 18:45:32.845111] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:33:38.263 [2024-12-06 18:45:32.845175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382187 ] 00:33:38.263 [2024-12-06 18:45:32.935590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.263 [2024-12-06 18:45:32.989331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.205 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:39.205 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:39.205 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:39.205 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.205 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:39.205 NVMe0n1 00:33:39.205 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.205 18:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:39.205 Running I/O for 10 seconds... 00:33:41.531 8194.00 IOPS, 32.01 MiB/s [2024-12-06T17:45:37.259Z] 8704.00 IOPS, 34.00 MiB/s [2024-12-06T17:45:38.201Z] 9176.67 IOPS, 35.85 MiB/s [2024-12-06T17:45:39.144Z] 10148.00 IOPS, 39.64 MiB/s [2024-12-06T17:45:40.088Z] 10832.80 IOPS, 42.32 MiB/s [2024-12-06T17:45:41.108Z] 11246.50 IOPS, 43.93 MiB/s [2024-12-06T17:45:42.057Z] 11552.86 IOPS, 45.13 MiB/s [2024-12-06T17:45:42.999Z] 11800.12 IOPS, 46.09 MiB/s [2024-12-06T17:45:43.938Z] 12047.67 IOPS, 47.06 MiB/s [2024-12-06T17:45:44.199Z] 12187.30 IOPS, 47.61 MiB/s 00:33:49.415 Latency(us) 00:33:49.415 [2024-12-06T17:45:44.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.415 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:49.415 Verification LBA range: start 0x0 length 0x4000 00:33:49.415 NVMe0n1 : 10.05 12221.08 47.74 0.00 0.00 83508.00 20643.84 74274.13 00:33:49.415 [2024-12-06T17:45:44.199Z] =================================================================================================================== 00:33:49.415 [2024-12-06T17:45:44.199Z] Total : 12221.08 47.74 0.00 0.00 83508.00 20643.84 74274.13 00:33:49.415 { 00:33:49.415 "results": [ 00:33:49.415 { 00:33:49.415 "job": "NVMe0n1", 00:33:49.415 "core_mask": "0x1", 00:33:49.415 "workload": "verify", 00:33:49.415 "status": "finished", 00:33:49.415 "verify_range": { 00:33:49.415 "start": 0, 00:33:49.415 "length": 16384 00:33:49.415 }, 00:33:49.415 "queue_depth": 1024, 00:33:49.415 "io_size": 4096, 00:33:49.415 "runtime": 10.053778, 00:33:49.415 "iops": 12221.07748947709, 00:33:49.415 "mibps": 47.73858394326988, 00:33:49.415 "io_failed": 0, 00:33:49.415 "io_timeout": 0, 00:33:49.415 "avg_latency_us": 83507.99552213216, 00:33:49.415 "min_latency_us": 20643.84, 00:33:49.415 "max_latency_us": 74274.13333333333 00:33:49.415 } 00:33:49.415 ], 
00:33:49.415 "core_count": 1 00:33:49.415 } 00:33:49.415 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2382187 00:33:49.415 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2382187 ']' 00:33:49.415 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2382187 00:33:49.415 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:49.415 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.415 18:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382187 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2382187' 00:33:49.415 killing process with pid 2382187 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2382187 00:33:49.415 Received shutdown signal, test time was about 10.000000 seconds 00:33:49.415 00:33:49.415 Latency(us) 00:33:49.415 [2024-12-06T17:45:44.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:49.415 [2024-12-06T17:45:44.199Z] =================================================================================================================== 00:33:49.415 [2024-12-06T17:45:44.199Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2382187 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:49.415 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.415 rmmod nvme_tcp 00:33:49.415 rmmod nvme_fabrics 00:33:49.676 rmmod nvme_keyring 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:49.677 18:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2381992 ']' 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2381992 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2381992 ']' 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2381992 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2381992 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381992' 00:33:49.677 killing process with pid 2381992 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2381992 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2381992 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.677 18:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:52.219 00:33:52.219 real 0m22.414s 00:33:52.219 user 0m24.612s 00:33:52.219 sys 0m7.471s 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:52.219 ************************************ 00:33:52.219 END TEST nvmf_queue_depth 00:33:52.219 ************************************ 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:52.219 ************************************ 00:33:52.219 START TEST nvmf_target_multipath 00:33:52.219 ************************************ 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:52.219 * Looking for test storage... 00:33:52.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:52.219 18:45:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.219 --rc genhtml_branch_coverage=1 00:33:52.219 --rc genhtml_function_coverage=1 00:33:52.219 --rc genhtml_legend=1 00:33:52.219 --rc geninfo_all_blocks=1 00:33:52.219 --rc geninfo_unexecuted_blocks=1 00:33:52.219 00:33:52.219 ' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.219 --rc genhtml_branch_coverage=1 00:33:52.219 --rc genhtml_function_coverage=1 00:33:52.219 --rc genhtml_legend=1 00:33:52.219 --rc geninfo_all_blocks=1 00:33:52.219 --rc geninfo_unexecuted_blocks=1 00:33:52.219 00:33:52.219 ' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.219 --rc genhtml_branch_coverage=1 00:33:52.219 --rc genhtml_function_coverage=1 00:33:52.219 --rc genhtml_legend=1 00:33:52.219 --rc geninfo_all_blocks=1 00:33:52.219 --rc 
geninfo_unexecuted_blocks=1 00:33:52.219 00:33:52.219 ' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:52.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.219 --rc genhtml_branch_coverage=1 00:33:52.219 --rc genhtml_function_coverage=1 00:33:52.219 --rc genhtml_legend=1 00:33:52.219 --rc geninfo_all_blocks=1 00:33:52.219 --rc geninfo_unexecuted_blocks=1 00:33:52.219 00:33:52.219 ' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
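The lcov version probe traced a few lines above is a generic dotted-version comparison: cmp_versions in scripts/common.sh splits each version string on '.', '-' and ':' and compares component by component, treating missing components as zero, so "lt 1.15 2" holds here and the harness keeps the lcov 1.x spelling of the coverage options (the lcov_-prefixed --rc names). A minimal sketch of the same comparison, assuming plain numeric components; the helper name ver_lt is illustrative, not SPDK's own:

    # Compare dotted versions component-wise; succeed (exit 0) when $1 < $2.
    # Splits on the same separators the trace shows (IFS=.-:); missing
    # components count as 0; non-numeric components are not handled here.
    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0     # first differing component decides
            (( x > y )) && return 1
        done
        return 1                        # equal is not "less than"
    }

    # Same decision the trace makes: pick 1.x-style rc flags for lcov < 2.
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

Zero-padding the shorter component list is what makes 1.15 compare below 2 even though the two strings have different numbers of components.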
00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.219 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:52.220 18:45:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:52.220 18:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
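With xtrace disabled, gather_supported_nvmf_pci_devs walks the PCI bus; the trace that follows prints its results, matching the two Intel E810 ports (vendor 0x8086, device 0x159b) and the kernel net devices bound under them (cvl_0_0 and cvl_0_1). A rough sysfs equivalent of that walk, for illustration only; the real function also carries x722 and Mellanox ID tables and rdma-transport branches that the sketch omits:

    # Find E810 ports and the net interfaces the kernel bound to them.
    # The vendor/device IDs are the ones matched in the trace; the loop
    # itself is a simplified stand-in for gather_supported_nvmf_pci_devs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do           # present only when a netdev driver is bound
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done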
00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.350 18:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:00.350 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.350 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:00.351 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:00.351 18:45:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:00.351 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:00.351 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:00.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:00.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:34:00.351 00:34:00.351 --- 10.0.0.2 ping statistics --- 00:34:00.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.351 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:00.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:34:00.351 00:34:00.351 --- 10.0.0.1 ping statistics --- 00:34:00.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.351 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:00.351 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.352 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:00.352 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:00.352 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.352 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:00.352 18:45:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:00.352 only one NIC for nvmf test 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:00.352 rmmod nvme_tcp 00:34:00.352 rmmod nvme_fabrics 00:34:00.352 rmmod nvme_keyring 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:00.352 18:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:00.352 18:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:01.734 18:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:01.734 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:01.735 00:34:01.735 real 0m9.675s 00:34:01.735 user 0m2.110s 00:34:01.735 sys 0m5.522s 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:01.735 ************************************ 00:34:01.735 END TEST nvmf_target_multipath 00:34:01.735 ************************************ 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:01.735 ************************************ 00:34:01.735 START TEST nvmf_zcopy 00:34:01.735 ************************************ 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:01.735 * Looking for test storage... 
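The multipath test above exits early ("only one NIC for nvmf test") because nvmf_tcp_init left NVMF_SECOND_TARGET_IP empty: this rig exposes a single usable NIC pair. The zcopy run starting here repeats the same nvmftestinit sequence. Its core, paraphrased from the nvmf_tcp_init trace earlier in the log with the same interface and namespace names, moves one port into a private network namespace so target and initiator get independent IP stacks on one host and their NVMe/TCP traffic actually crosses the wire:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port (4420). The SPDK_NVMF comment tag is
    # what lets the teardown's iptables-save | grep -v SPDK_NVMF |
    # iptables-restore strip exactly the rules this harness added.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

The two pings are the same reachability check whose output appears in the trace above; teardown (nvmftestfini) reverses all of this, flushing the addresses and deleting the namespace via remove_spdk_ns.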
00:34:01.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:34:01.735 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:01.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.996 --rc genhtml_branch_coverage=1 00:34:01.996 --rc genhtml_function_coverage=1 00:34:01.996 --rc genhtml_legend=1 00:34:01.996 --rc geninfo_all_blocks=1 00:34:01.996 --rc geninfo_unexecuted_blocks=1 00:34:01.996 00:34:01.996 ' 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:01.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.996 --rc genhtml_branch_coverage=1 00:34:01.996 --rc genhtml_function_coverage=1 00:34:01.996 --rc genhtml_legend=1 00:34:01.996 --rc geninfo_all_blocks=1 00:34:01.996 --rc geninfo_unexecuted_blocks=1 00:34:01.996 00:34:01.996 ' 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:01.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.996 --rc genhtml_branch_coverage=1 00:34:01.996 --rc genhtml_function_coverage=1 00:34:01.996 --rc genhtml_legend=1 00:34:01.996 --rc geninfo_all_blocks=1 00:34:01.996 --rc geninfo_unexecuted_blocks=1 00:34:01.996 00:34:01.996 ' 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:01.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.996 --rc genhtml_branch_coverage=1 00:34:01.996 --rc genhtml_function_coverage=1 00:34:01.996 --rc genhtml_legend=1 00:34:01.996 --rc geninfo_all_blocks=1 00:34:01.996 --rc geninfo_unexecuted_blocks=1 00:34:01.996 00:34:01.996 ' 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.996 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.997 18:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:01.997 18:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:10.137 18:46:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:10.137 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:10.137 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:10.137 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:10.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
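gather_supported_nvmf_pci_devs keys supported NICs by vendor:device pairs (Intel 0x8086 with 0x1592/0x159b for E810 and 0x37d2 for X722, plus a list of Mellanox 0x15b3 devices); both 0000:4b:00.0 and 0000:4b:00.1 match the E810 id 0x159b here. A minimal sysfs-based sketch of the same matching idea, not the SPDK implementation itself:

    intel=0x8086
    e810_ids=("$intel:0x1592" "$intel:0x159b")
    pci_devs=()
    for dev in /sys/bus/pci/devices/*; do
        id="$(<"$dev/vendor"):$(<"$dev/device")"   # e.g. 0x8086:0x159b
        for want in "${e810_ids[@]}"; do
            [[ $id == "$want" ]] && pci_devs+=("${dev##*/}")
        done
    done
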
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:10.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:10.138 18:46:03 
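Each matching PCI function is then resolved to its kernel netdev through /sys/bus/pci/devices/$pci/net/, and interfaces that are up are kept (here cvl_0_0 and cvl_0_1). A condensed sketch of that resolution, assuming the standard sysfs layout:

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            dev=${path##*/}                           # e.g. cvl_0_0
            if [[ $(<"$path/operstate") == up ]]; then
                echo "Found net devices under $pci: $dev"
                net_devs+=("$dev")
            fi
        done
    done
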
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:10.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:34:10.138 00:34:10.138 --- 10.0.0.2 ping statistics --- 00:34:10.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.138 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:10.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:34:10.138 00:34:10.138 --- 10.0.0.1 ping statistics --- 00:34:10.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.138 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2392675 00:34:10.138 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2392675 00:34:10.139 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
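Taken together, nvmf_tcp_init builds a two-endpoint topology on a single host: the target port cvl_0_0 (10.0.0.2) moves into the cvl_0_0_ns_spdk namespace, the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace, an iptables rule opens the NVMe/TCP port, and both directions are ping-verified. The same setup, collected from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator
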
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:10.139 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2392675 ']' 00:34:10.139 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.139 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.139 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.139 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.139 18:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.139 [2024-12-06 18:46:03.935501] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:10.139 [2024-12-06 18:46:03.936597] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:34:10.139 [2024-12-06 18:46:03.936655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.139 [2024-12-06 18:46:04.037641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.139 [2024-12-06 18:46:04.087567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.139 [2024-12-06 18:46:04.087620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.139 [2024-12-06 18:46:04.087629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.139 [2024-12-06 18:46:04.087636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.139 [2024-12-06 18:46:04.087653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:10.139 [2024-12-06 18:46:04.088405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.139 [2024-12-06 18:46:04.166711] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:10.139 [2024-12-06 18:46:04.166998] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
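nvmfappstart launches nvmf_tgt inside the target namespace (pid 2392675 above) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock, retrying up to max_retries=100. A minimal sketch of that polling loop; rpc_get_methods is a standard SPDK RPC, the rest approximates common/autotest_common.sh and shortens the workspace path:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # succeeds once the app is listening on the UNIX domain socket
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.5
    done
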
00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.139 [2024-12-06 18:46:04.801268] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.139 [2024-12-06 18:46:04.829565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:10.139 18:46:04 
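zcopy.sh then provisions the target over RPC: a TCP transport with zero-copy enabled (-o, -c 0, --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 with up to 10 namespaces, data and discovery listeners on 10.0.0.2:4420, and (just below) a 32 MiB malloc bdev attached as namespace 1. The same sequence as direct rpc.py calls, assuming the default /var/tmp/spdk.sock socket:

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy     # zero-copy TCP transport
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0            # 32 MiB bdev, 4 KiB blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
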
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.139 malloc0 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.139 { 00:34:10.139 "params": { 00:34:10.139 "name": "Nvme$subsystem", 00:34:10.139 "trtype": "$TEST_TRANSPORT", 00:34:10.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.139 "adrfam": "ipv4", 00:34:10.139 "trsvcid": "$NVMF_PORT", 00:34:10.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.139 "hdgst": ${hdgst:-false}, 00:34:10.139 "ddgst": ${ddgst:-false} 00:34:10.139 }, 00:34:10.139 "method": "bdev_nvme_attach_controller" 00:34:10.139 } 00:34:10.139 EOF 00:34:10.139 )") 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:10.139 18:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:10.139 "params": { 00:34:10.139 "name": "Nvme1", 00:34:10.139 "trtype": "tcp", 00:34:10.139 "traddr": "10.0.0.2", 00:34:10.139 "adrfam": "ipv4", 00:34:10.139 "trsvcid": "4420", 00:34:10.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:10.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:10.139 "hdgst": false, 00:34:10.139 "ddgst": false 00:34:10.139 }, 00:34:10.139 "method": "bdev_nvme_attach_controller" 00:34:10.139 }' 00:34:10.415 [2024-12-06 18:46:04.933039] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
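gen_nvmf_target_json expands the heredoc into the bdev_nvme_attach_controller config printed above, and bdevperf consumes it through process substitution, which is where the --json /dev/fd/62 path comes from. The invocation pattern, reconstructed under that assumption:

    # gen_nvmf_target_json prints the JSON config on stdout; <(...) exposes it
    # to bdevperf as /dev/fd/NN, so no temporary file is needed.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 10 -q 128 -w verify -o 8192
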
00:34:10.415 [2024-12-06 18:46:04.933114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392710 ] 00:34:10.415 [2024-12-06 18:46:05.026812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.415 [2024-12-06 18:46:05.080696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.676 Running I/O for 10 seconds... 00:34:13.002 6456.00 IOPS, 50.44 MiB/s [2024-12-06T17:46:08.728Z] 6504.50 IOPS, 50.82 MiB/s [2024-12-06T17:46:09.672Z] 6523.33 IOPS, 50.96 MiB/s [2024-12-06T17:46:10.615Z] 6618.00 IOPS, 51.70 MiB/s [2024-12-06T17:46:11.556Z] 7208.60 IOPS, 56.32 MiB/s [2024-12-06T17:46:12.496Z] 7621.83 IOPS, 59.55 MiB/s [2024-12-06T17:46:13.434Z] 7924.14 IOPS, 61.91 MiB/s [2024-12-06T17:46:14.816Z] 8149.12 IOPS, 63.67 MiB/s [2024-12-06T17:46:15.755Z] 8323.78 IOPS, 65.03 MiB/s [2024-12-06T17:46:15.755Z] 8463.20 IOPS, 66.12 MiB/s 00:34:20.971 Latency(us) 00:34:20.971 [2024-12-06T17:46:15.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:20.971 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:20.971 Verification LBA range: start 0x0 length 0x1000 00:34:20.971 Nvme1n1 : 10.01 8463.62 66.12 0.00 0.00 15077.42 662.19 27088.21 00:34:20.971 [2024-12-06T17:46:15.755Z] =================================================================================================================== 00:34:20.971 [2024-12-06T17:46:15.755Z] Total : 8463.62 66.12 0.00 0.00 15077.42 662.19 27088.21 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2394709 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:20.971 { 00:34:20.971 "params": { 00:34:20.971 "name": "Nvme$subsystem", 00:34:20.971 "trtype": "$TEST_TRANSPORT", 00:34:20.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:20.971 "adrfam": "ipv4", 00:34:20.971 "trsvcid": "$NVMF_PORT", 00:34:20.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:20.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:20.971 "hdgst": ${hdgst:-false}, 00:34:20.971 "ddgst": ${ddgst:-false} 00:34:20.971 }, 00:34:20.971 "method": "bdev_nvme_attach_controller" 00:34:20.971 } 00:34:20.971 EOF 00:34:20.971 )") 00:34:20.971 [2024-12-06 18:46:15.508838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:34:20.971 [2024-12-06 18:46:15.508871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.971 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:20.972 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:20.972 [2024-12-06 18:46:15.516801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.516810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:20.972 18:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:20.972 "params": { 00:34:20.972 "name": "Nvme1", 00:34:20.972 "trtype": "tcp", 00:34:20.972 "traddr": "10.0.0.2", 00:34:20.972 "adrfam": "ipv4", 00:34:20.972 "trsvcid": "4420", 00:34:20.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:20.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:20.972 "hdgst": false, 00:34:20.972 "ddgst": false 00:34:20.972 }, 00:34:20.972 "method": "bdev_nvme_attach_controller" 00:34:20.972 }' 00:34:20.972 [2024-12-06 18:46:15.524797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.524805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.532798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.532805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.544797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.544805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.553795] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:34:20.972 [2024-12-06 18:46:15.553841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394709 ] 00:34:20.972 [2024-12-06 18:46:15.556797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.556805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.568797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.568804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.580797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.580804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.588797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.588804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.596796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.596803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.604798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.604805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.612797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.612804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.624797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.624804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.633869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.972 [2024-12-06 18:46:15.636799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.636809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.648798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.648807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.660796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.660806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.663075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.972 [2024-12-06 18:46:15.672797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.672805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.684803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:34:20.972 [2024-12-06 18:46:15.684816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.696799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.696813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.708798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.708808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.720797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.720805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.732809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.732825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.972 [2024-12-06 18:46:15.744800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.972 [2024-12-06 18:46:15.744810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.231 [2024-12-06 18:46:15.756801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.756812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.768798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.768808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.780798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.780807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.792805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.792820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 Running I/O for 5 seconds... 
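The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follows is expected output, not a failure: while the second bdevperf run (randrw, 5 seconds) is in flight, the test keeps re-adding the same namespace to exercise the error path under live zero-copy I/O, and every attempt is rejected because NSID 1 is still attached. Plausibly a loop of this shape, an assumption about zcopy.sh rather than a quote of it:

    # Hammer the add-namespace error path while bdevperf (perfpid) is running.
    while kill -0 "$perfpid" 2> /dev/null; do
        # each call fails with "Requested NSID 1 already in use";
        # '|| :' swallows the error so the loop keeps going
        ./scripts/rpc.py -s /var/tmp/spdk.sock \
            nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || :
    done
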
00:34:21.232 [2024-12-06 18:46:15.806921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.806937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.820825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.820841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.834112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.834127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.848053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.848069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.861237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.861252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.876503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.876519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.889779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.889793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.903610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.903625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.916703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.916718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.929758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.929773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.943920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.943934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.957017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.957033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.969749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.969764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.983611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:15.983626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:15.996851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 
[2024-12-06 18:46:15.996870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.232 [2024-12-06 18:46:16.009580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.232 [2024-12-06 18:46:16.009594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.023479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.023494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.036564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.036579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.049785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.049799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.064039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.064054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.077160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.077174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.092313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.092328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.105212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.105226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.119569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.119585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.132784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.132799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.145675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.145690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.160014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.160029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.173154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.173168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.188081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.188096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.201038] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.201053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.213851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.213865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.228235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.228250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.241429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.241443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.256377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.256397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.492 [2024-12-06 18:46:16.269492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.492 [2024-12-06 18:46:16.269506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.283841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.283856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.296900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.296915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.309818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.309832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.324023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.324038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.337136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.337150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.351784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.351799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.365182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.365196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.379835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.379850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.392675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.392690] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.405319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.405333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.419807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.419822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.432733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.432749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.445978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.445992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.460294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.460309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.473385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.473400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.487735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.487751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.500703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.500718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.513572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.513589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.752 [2024-12-06 18:46:16.527855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.752 [2024-12-06 18:46:16.527870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.540852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.540868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.553718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.553732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.567779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.567793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.581032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.581046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.593947] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.593961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.607980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.607995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.620829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.620843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.633399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.633412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.647883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.647898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.661069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.661082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.675673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.675688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.688680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.688694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.701321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.701334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.715765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.715779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.728431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.728446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.741813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.741827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.755864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.755878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.768861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.768875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.013 [2024-12-06 18:46:16.781090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.013 [2024-12-06 18:46:16.781103] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:22.013 [2024-12-06 18:46:16.795666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:22.013 [2024-12-06 18:46:16.795680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:22.274 19116.00 IOPS, 149.34 MiB/s [2024-12-06T17:46:17.058Z]
[2024-12-06 18:46:16.808855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:22.274 [2024-12-06 18:46:16.808870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:23.055 [2024-12-06 18:46:17.797563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:23.055 [2024-12-06 18:46:17.797577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:23.055 19134.00 IOPS, 149.48 MiB/s [2024-12-06T17:46:17.839Z]
[2024-12-06 18:46:17.812465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:23.055 [2024-12-06 18:46:17.812480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:24.098 [2024-12-06 18:46:18.797027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:24.098 [2024-12-06 18:46:18.797041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:24.098 19147.00 IOPS, 149.59 MiB/s [2024-12-06T17:46:18.882Z]
[2024-12-06 18:46:18.809978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:24.098 [2024-12-06 18:46:18.809991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:25.140 [2024-12-06 18:46:19.808508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:25.140 [2024-12-06 18:46:19.808523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:25.140 19143.25 IOPS, 149.56 MiB/s [2024-12-06T17:46:19.924Z]
[2024-12-06 18:46:19.821237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:25.140 [2024-12-06 18:46:19.821251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:25.921 [2024-12-06 18:46:20.699643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:25.921 [2024-12-06 18:46:20.699658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.712817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.712832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.725669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.725683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.739631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.739649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.752779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.752794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.765686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.765699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.779803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.779817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.793067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.793081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.807620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.807634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 19105.00 IOPS, 149.26 MiB/s
00:34:26.182 Latency(us)
00:34:26.182 [2024-12-06T17:46:20.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:26.182 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:26.182 Nvme1n1 : 5.00 19115.79 149.34 0.00 0.00 6691.11 2785.28 11960.32
00:34:26.182 [2024-12-06T17:46:20.966Z] ===================================================================================================================
00:34:26.182 [2024-12-06T17:46:20.966Z] Total : 19115.79 149.34 0.00 0.00 6691.11 2785.28 11960.32
00:34:26.182 [2024-12-06 18:46:20.816805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.816819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.828801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.828813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.840804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.840816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:26.182 [2024-12-06 18:46:20.852804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:26.182 [2024-12-06 18:46:20.852817]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.182 [2024-12-06 18:46:20.864802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:26.182 [2024-12-06 18:46:20.864812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.182 [2024-12-06 18:46:20.876798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:26.182 [2024-12-06 18:46:20.876808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.182 [2024-12-06 18:46:20.888797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:26.182 [2024-12-06 18:46:20.888806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.182 [2024-12-06 18:46:20.900800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:26.182 [2024-12-06 18:46:20.900810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.182 [2024-12-06 18:46:20.912797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:26.182 [2024-12-06 18:46:20.912805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2394709) - No such process 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2394709 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:26.182 delay0 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.182 18:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:26.443 [2024-12-06 18:46:21.041901] nvme_fabric.c: 295:nvme_fabric_discover_probe: 
*WARNING*: Skipping unsupported current discovery service or discovery service referral
00:34:33.022 [2024-12-06 18:46:27.498571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bea0f0 is same with the state(6) to be set
00:34:33.022 [2024-12-06 18:46:27.498608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bea0f0 is same with the state(6) to be set
00:34:33.022 Initializing NVMe Controllers
00:34:33.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:33.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:34:33.022 Initialization complete. Launching workers.
00:34:33.022 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 9690
00:34:33.022 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9913, failed to submit 67
00:34:33.022 success 9790, unsuccessful 123, failed 0
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:33.022 rmmod nvme_tcp
00:34:33.022 rmmod nvme_fabrics
00:34:33.022 rmmod nvme_keyring
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2392675 ']'
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2392675
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2392675 ']'
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2392675
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2392675
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:33.022
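A quick consistency check on the abort counters reported above; the variable names are mine, not harness output, and the last line re-derives the zcopy throughput from the latency table (8192-byte I/Os):

  completed=290; failed=9690            # I/Os the abort example saw complete vs. get aborted
  submitted=9913; not_submitted=67      # abort commands issued vs. rejected at submit time
  success=9790; unsuccessful=123        # outcomes of the submitted aborts
  echo $(( completed + failed ))        # 9980: total I/Os in flight during the run
  echo $(( submitted + not_submitted )) # 9980: exactly one abort attempted per I/O
  echo $(( success + unsuccessful ))    # 9913: every submitted abort accounted for
  awk 'BEGIN { printf "%.2f MiB/s\n", 19115.79 * 8192 / 1048576 }'  # 149.34, matching the table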
18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2392675' 00:34:33.022 killing process with pid 2392675 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2392675 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2392675 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.022 18:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.572 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:35.572 00:34:35.572 real 0m33.487s 00:34:35.572 user 0m42.743s 00:34:35.572 sys 0m12.110s 00:34:35.572 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:35.572 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.572 ************************************ 00:34:35.572 END TEST nvmf_zcopy 00:34:35.572 ************************************ 00:34:35.572 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:35.572 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:35.572 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:35.572 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:35.572 ************************************ 00:34:35.572 START TEST nvmf_nmic 00:34:35.572 ************************************ 00:34:35.572 18:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:35.572 * Looking for test storage... 
00:34:35.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:35.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.572 --rc genhtml_branch_coverage=1 00:34:35.572 --rc genhtml_function_coverage=1 00:34:35.572 --rc genhtml_legend=1 00:34:35.572 --rc geninfo_all_blocks=1 00:34:35.572 --rc geninfo_unexecuted_blocks=1 00:34:35.572 00:34:35.572 ' 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:35.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.572 --rc genhtml_branch_coverage=1 00:34:35.572 --rc genhtml_function_coverage=1 00:34:35.572 --rc genhtml_legend=1 00:34:35.572 --rc geninfo_all_blocks=1 00:34:35.572 --rc geninfo_unexecuted_blocks=1 00:34:35.572 00:34:35.572 ' 00:34:35.572 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:35.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.572 --rc genhtml_branch_coverage=1 00:34:35.572 --rc genhtml_function_coverage=1 00:34:35.572 --rc genhtml_legend=1 00:34:35.572 --rc geninfo_all_blocks=1 00:34:35.573 --rc geninfo_unexecuted_blocks=1 00:34:35.573 00:34:35.573 ' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:35.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:35.573 --rc genhtml_branch_coverage=1 00:34:35.573 --rc genhtml_function_coverage=1 00:34:35.573 --rc genhtml_legend=1 00:34:35.573 --rc geninfo_all_blocks=1 00:34:35.573 --rc geninfo_unexecuted_blocks=1 00:34:35.573 00:34:35.573 ' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
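The xtrace above walks a field-by-field numeric version compare (1.15 against 2, so the lcov-specific LCOV_OPTS get exported). A condensed, runnable stand-in for that helper, with my own function name rather than the harness's cmp_versions from scripts/common.sh:

  cmp_lt() {                                # exit 0 when version $1 < version $2
      local -a a b; local i
      IFS=.- read -ra a <<< "$1"            # split fields on dots and dashes
      IFS=.- read -ra b <<< "$2"
      for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                              # equal is not less-than
  }
  cmp_lt 1.15 2 && echo '1.15 < 2'          # same verdict as the trace above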
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:35.573 18:46:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:35.573 18:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:43.723 18:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:43.723 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.723 18:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:43.723 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:43.723 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.723 
18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:43.723 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.723 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
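The network split being assembled here is easy to reproduce by hand; a minimal sketch using the same iproute2 commands the harness just ran. The lspci/sysfs lines show how an E810 port (vendor 0x8086, device 0x159b, as discovered above) maps to its cvl_* netdev name; the link-up and ping verification follow in the log below:

  lspci -d 8086:159b                         # list the Intel E810 ports
  ls /sys/bus/pci/devices/0000:4b:00.0/net/  # PCI address -> netdev name (cvl_0_0 here)
  ip netns add cvl_0_0_ns_spdk               # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0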
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:43.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:43.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms
00:34:43.724
00:34:43.724 --- 10.0.0.2 ping statistics ---
00:34:43.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:43.724 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:43.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:43.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms
00:34:43.724
00:34:43.724 --- 10.0.0.1 ping statistics ---
00:34:43.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:43.724 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2401109
18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
nvmf/common.sh@510 -- # waitforlisten 2401109 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2401109 ']' 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:43.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:43.724 18:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.724 [2024-12-06 18:46:37.651877] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:43.724 [2024-12-06 18:46:37.653023] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:34:43.724 [2024-12-06 18:46:37.653074] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:43.724 [2024-12-06 18:46:37.752057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:43.724 [2024-12-06 18:46:37.808092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:43.724 [2024-12-06 18:46:37.808146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:43.724 [2024-12-06 18:46:37.808155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:43.724 [2024-12-06 18:46:37.808163] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:43.724 [2024-12-06 18:46:37.808169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:43.724 [2024-12-06 18:46:37.810163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:43.724 [2024-12-06 18:46:37.810319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:43.724 [2024-12-06 18:46:37.810481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:43.724 [2024-12-06 18:46:37.810481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:43.724 [2024-12-06 18:46:37.889899] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:43.724 [2024-12-06 18:46:37.890791] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:43.724 [2024-12-06 18:46:37.891470] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
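At this point the target is running: nvmf_tgt was launched inside the target namespace with --interrupt-mode and core mask 0xF (cores 0 through 3, matching the four "Reactor started" notices above), which is why every spdk_thread is switched to intr mode. The launch line from the trace, reflowed for readability:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0xF
  # -i: shared-memory id, -e: tracepoint group mask (0xFFFF per the notice above), -m: core mask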
00:34:43.724 [2024-12-06 18:46:37.891527] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:43.724 [2024-12-06 18:46:37.891587] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:43.724 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:43.724 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:43.724 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:43.724 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:43.724 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 [2024-12-06 18:46:38.515466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 Malloc0 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
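With the lines above, the nmic target setup is complete: TCP transport created, Malloc0 attached as a namespace of cnode1, listener on 10.0.0.2:4420. Test case1 below then creates a second subsystem and tries to attach the same Malloc0, which must fail because the bdev is already claimed. The sequence, sketched as direct rpc.py calls (the script path is an assumption; the RPC names are the ones the trace uses):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0  # expected failure: bdev already claimed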
00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 [2024-12-06 18:46:38.611629] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:43.987 test case1: single bdev can't be used in multiple subsystems 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 [2024-12-06 18:46:38.647098] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:43.987 [2024-12-06 18:46:38.647124] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:43.987 [2024-12-06 18:46:38.647133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:43.987 request: 00:34:43.987 { 00:34:43.987 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:43.987 "namespace": { 00:34:43.987 "bdev_name": "Malloc0", 00:34:43.987 "no_auto_visible": false, 00:34:43.987 "hide_metadata": false 00:34:43.987 }, 00:34:43.987 "method": "nvmf_subsystem_add_ns", 00:34:43.987 "req_id": 1 00:34:43.987 } 00:34:43.987 Got JSON-RPC error response 00:34:43.987 response: 00:34:43.987 { 00:34:43.987 "code": -32602, 00:34:43.987 "message": "Invalid parameters" 00:34:43.987 } 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:43.987 18:46:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:43.987 Adding namespace failed - expected result. 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:43.987 test case2: host connect to nvmf target in multiple paths 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:43.987 [2024-12-06 18:46:38.659247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.987 18:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:44.562 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:44.824 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:44.824 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:44.824 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:44.824 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:44.824 18:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:47.373 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:47.373 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:47.373 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:47.373 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:47.373 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:47.373 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:47.373 18:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:47.373 [global] 00:34:47.373 thread=1 00:34:47.373 invalidate=1 
00:34:47.373 rw=write 00:34:47.373 time_based=1 00:34:47.373 runtime=1 00:34:47.373 ioengine=libaio 00:34:47.373 direct=1 00:34:47.373 bs=4096 00:34:47.373 iodepth=1 00:34:47.373 norandommap=0 00:34:47.373 numjobs=1 00:34:47.373 00:34:47.373 verify_dump=1 00:34:47.373 verify_backlog=512 00:34:47.373 verify_state_save=0 00:34:47.373 do_verify=1 00:34:47.373 verify=crc32c-intel 00:34:47.373 [job0] 00:34:47.373 filename=/dev/nvme0n1 00:34:47.373 Could not set queue depth (nvme0n1) 00:34:47.373 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:47.373 fio-3.35 00:34:47.373 Starting 1 thread 00:34:48.333 00:34:48.333 job0: (groupid=0, jobs=1): err= 0: pid=2402245: Fri Dec 6 18:46:43 2024 00:34:48.333 read: IOPS=17, BW=70.2KiB/s (71.9kB/s)(72.0KiB/1025msec) 00:34:48.333 slat (nsec): min=10361, max=27053, avg=24745.17, stdev=3638.01 00:34:48.333 clat (usec): min=1013, max=42050, avg=39471.06, stdev=9607.05 00:34:48.333 lat (usec): min=1023, max=42075, avg=39495.80, stdev=9610.64 00:34:48.333 clat percentiles (usec): 00:34:48.333 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41157], 20.00th=[41157], 00:34:48.333 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:48.333 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:48.333 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:48.333 | 99.99th=[42206] 00:34:48.333 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:34:48.333 slat (nsec): min=9631, max=66556, avg=28844.83, stdev=9532.13 00:34:48.333 clat (usec): min=248, max=788, avg=577.52, stdev=103.31 00:34:48.333 lat (usec): min=280, max=820, avg=606.36, stdev=107.17 00:34:48.333 clat percentiles (usec): 00:34:48.333 | 1.00th=[ 314], 5.00th=[ 383], 10.00th=[ 429], 20.00th=[ 490], 00:34:48.333 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:34:48.333 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:34:48.333 | 99.00th=[ 758], 99.50th=[ 766], 99.90th=[ 791], 99.95th=[ 791], 00:34:48.333 | 99.99th=[ 791] 00:34:48.333 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:48.333 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:48.333 lat (usec) : 250=0.19%, 500=21.13%, 750=73.21%, 1000=2.08% 00:34:48.333 lat (msec) : 2=0.19%, 50=3.21% 00:34:48.333 cpu : usr=0.49%, sys=1.66%, ctx=530, majf=0, minf=1 00:34:48.333 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.333 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.333 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.333 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.333 00:34:48.333 Run status group 0 (all jobs): 00:34:48.333 READ: bw=70.2KiB/s (71.9kB/s), 70.2KiB/s-70.2KiB/s (71.9kB/s-71.9kB/s), io=72.0KiB (73.7kB), run=1025-1025msec 00:34:48.333 WRITE: bw=1998KiB/s (2046kB/s), 1998KiB/s-1998KiB/s (2046kB/s-2046kB/s), io=2048KiB (2097kB), run=1025-1025msec 00:34:48.333 00:34:48.333 Disk stats (read/write): 00:34:48.333 nvme0n1: ios=65/512, merge=0/0, ticks=647/275, in_queue=922, util=93.39% 00:34:48.333 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:48.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 
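[editor's sketch] The multipath pass above reduces to a short shell sequence. A minimal sketch, assuming a target that exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420 and 4421 as in this run (host NQN/ID flags elided; the polling loop is a stand-in for the harness's waitforserial helper):

# Connect to the same subsystem over both listeners (two paths, one namespace):
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# Poll until the block device shows up under the subsystem serial:
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 1; done
# One-second 4 KiB write job with crc32c verification, then drop both paths:
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
nvme disconnect -n nqn.2016-06.io.spdk:cnode1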
00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:48.610 rmmod nvme_tcp 00:34:48.610 rmmod nvme_fabrics 00:34:48.610 rmmod nvme_keyring 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2401109 ']' 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2401109 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2401109 ']' 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2401109 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:48.610 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401109 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2401109' 00:34:48.890 killing process with pid 2401109 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2401109 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2401109 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:48.890 18:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:51.432 00:34:51.432 real 0m15.734s 00:34:51.432 user 0m34.300s 00:34:51.432 sys 0m7.525s 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:51.432 ************************************ 00:34:51.432 END TEST nvmf_nmic 00:34:51.432 ************************************ 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:51.432 ************************************ 00:34:51.432 START TEST nvmf_fio_target 00:34:51.432 ************************************ 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:51.432 * Looking for test storage... 
00:34:51.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.432 --rc genhtml_branch_coverage=1 00:34:51.432 --rc genhtml_function_coverage=1 00:34:51.432 --rc genhtml_legend=1 00:34:51.432 --rc geninfo_all_blocks=1 00:34:51.432 --rc geninfo_unexecuted_blocks=1 00:34:51.432 00:34:51.432 ' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.432 --rc genhtml_branch_coverage=1 00:34:51.432 --rc genhtml_function_coverage=1 00:34:51.432 --rc genhtml_legend=1 00:34:51.432 --rc geninfo_all_blocks=1 00:34:51.432 --rc geninfo_unexecuted_blocks=1 00:34:51.432 00:34:51.432 ' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.432 --rc genhtml_branch_coverage=1 00:34:51.432 --rc genhtml_function_coverage=1 00:34:51.432 --rc genhtml_legend=1 00:34:51.432 --rc geninfo_all_blocks=1 00:34:51.432 --rc geninfo_unexecuted_blocks=1 00:34:51.432 00:34:51.432 ' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:51.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.432 --rc genhtml_branch_coverage=1 00:34:51.432 --rc genhtml_function_coverage=1 00:34:51.432 --rc genhtml_legend=1 00:34:51.432 --rc geninfo_all_blocks=1 00:34:51.432 --rc geninfo_unexecuted_blocks=1 00:34:51.432 
00:34:51.432 ' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.432 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:51.433 18:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:59.580 18:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:59.580 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:59.581 18:46:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:59.581 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:59.581 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:59.581 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:59.581 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:59.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:59.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.691 ms 00:34:59.581 00:34:59.581 --- 10.0.0.2 ping statistics --- 00:34:59.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.581 rtt min/avg/max/mdev = 0.691/0.691/0.691/0.000 ms 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:59.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:59.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:34:59.581 00:34:59.581 --- 10.0.0.1 ping statistics --- 00:34:59.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.581 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2406584 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2406584 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2406584 ']' 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
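[editor's sketch] The namespace plumbing traced above can be reproduced by hand. A minimal sketch, assuming the two ports of one NIC are named cvl_0_0/cvl_0_1 as in this run; the harness additionally tags its iptables rule with an SPDK_NVMF comment so cleanup can find it later:

# Move the target-side port into its own namespace and address both ends:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator side and check both directions:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1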
00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.581 18:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:59.581 [2024-12-06 18:46:53.510124] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:59.581 [2024-12-06 18:46:53.511265] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:34:59.581 [2024-12-06 18:46:53.511317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.581 [2024-12-06 18:46:53.611753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:59.581 [2024-12-06 18:46:53.664531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.581 [2024-12-06 18:46:53.664586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.581 [2024-12-06 18:46:53.664595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.581 [2024-12-06 18:46:53.664603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.581 [2024-12-06 18:46:53.664609] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.581 [2024-12-06 18:46:53.666683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.581 [2024-12-06 18:46:53.666811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:59.581 [2024-12-06 18:46:53.666972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:59.581 [2024-12-06 18:46:53.666973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.581 [2024-12-06 18:46:53.745906] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:59.581 [2024-12-06 18:46:53.747042] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:59.581 [2024-12-06 18:46:53.747326] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:59.581 [2024-12-06 18:46:53.747699] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:59.581 [2024-12-06 18:46:53.747747] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
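[editor's sketch] Target startup in interrupt mode follows directly from the trace. A minimal sketch, assuming the build-tree paths used in this job; the readiness loop is a hypothetical stand-in for the harness's waitforlisten helper (rpc_get_methods only succeeds once the RPC socket is up):

# Launch nvmf_tgt inside the target namespace with interrupt mode enabled
# (-m 0xF pins four reactors; -e 0xFFFF enables all tracepoint groups):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# Block until the target answers JSON-RPC before configuring the transport:
until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192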
00:34:59.581 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.581 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:59.581 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:59.581 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:59.581 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:59.841 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:59.841 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:59.841 [2024-12-06 18:46:54.532115] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.841 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:00.100 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:00.100 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:00.360 18:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:00.360 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:00.619 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:00.619 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:00.879 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:00.879 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:00.879 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:01.139 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:01.139 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:01.399 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:01.399 18:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:01.659 18:46:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:01.659 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:01.659 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:01.920 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:01.920 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:02.179 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:02.179 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:02.179 18:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:02.438 [2024-12-06 18:46:57.104038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.438 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:02.696 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:02.955 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:03.525 18:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:03.525 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:03.525 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:03.525 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:03.525 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:03.525 18:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:05.435 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:05.435 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:05.435 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:05.435 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:05.435 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:05.435 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:05.435 18:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:05.435 [global] 00:35:05.435 thread=1 00:35:05.435 invalidate=1 00:35:05.435 rw=write 00:35:05.435 time_based=1 00:35:05.435 runtime=1 00:35:05.435 ioengine=libaio 00:35:05.435 direct=1 00:35:05.435 bs=4096 00:35:05.435 iodepth=1 00:35:05.435 norandommap=0 00:35:05.435 numjobs=1 00:35:05.435 00:35:05.435 verify_dump=1 00:35:05.435 verify_backlog=512 00:35:05.435 verify_state_save=0 00:35:05.435 do_verify=1 00:35:05.435 verify=crc32c-intel 00:35:05.435 [job0] 00:35:05.435 filename=/dev/nvme0n1 00:35:05.436 [job1] 00:35:05.436 filename=/dev/nvme0n2 00:35:05.436 [job2] 00:35:05.436 filename=/dev/nvme0n3 00:35:05.436 [job3] 00:35:05.436 filename=/dev/nvme0n4 00:35:05.436 Could not set queue depth (nvme0n1) 00:35:05.436 Could not set queue depth (nvme0n2) 00:35:05.436 Could not set queue depth (nvme0n3) 00:35:05.436 Could not set queue depth (nvme0n4) 00:35:05.695 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:05.695 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:05.695 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:05.695 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:05.695 fio-3.35 00:35:05.695 Starting 4 threads 00:35:07.080 00:35:07.080 job0: (groupid=0, jobs=1): err= 0: pid=2408162: Fri Dec 6 18:47:01 2024 00:35:07.080 read: IOPS=15, BW=63.2KiB/s (64.8kB/s)(64.0KiB/1012msec) 00:35:07.080 slat (nsec): min=25495, max=26198, avg=25851.81, stdev=204.31 00:35:07.080 clat (usec): min=810, max=42025, avg=39335.29, stdev=10275.90 00:35:07.080 lat (usec): min=836, max=42051, avg=39361.14, stdev=10275.81 00:35:07.080 clat percentiles (usec): 00:35:07.080 | 1.00th=[ 807], 5.00th=[ 807], 10.00th=[41157], 20.00th=[41681], 00:35:07.080 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:07.080 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:07.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:07.080 | 99.99th=[42206] 00:35:07.080 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:35:07.080 slat (usec): min=9, max=25779, avg=83.26, stdev=1137.86 00:35:07.080 clat (usec): min=193, max=1093, avg=651.88, stdev=143.41 00:35:07.080 lat (usec): min=228, max=26445, avg=735.14, stdev=1147.71 00:35:07.080 clat percentiles (usec): 00:35:07.080 | 1.00th=[ 355], 5.00th=[ 424], 10.00th=[ 465], 20.00th=[ 515], 00:35:07.080 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 701], 00:35:07.080 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 832], 95.00th=[ 881], 00:35:07.080 | 
99.00th=[ 971], 99.50th=[ 1045], 99.90th=[ 1090], 99.95th=[ 1090], 00:35:07.080 | 99.99th=[ 1090] 00:35:07.080 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:07.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:07.080 lat (usec) : 250=0.38%, 500=15.34%, 750=55.87%, 1000=25.00% 00:35:07.080 lat (msec) : 2=0.57%, 50=2.84% 00:35:07.080 cpu : usr=1.09%, sys=1.38%, ctx=534, majf=0, minf=1 00:35:07.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.080 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:07.080 job1: (groupid=0, jobs=1): err= 0: pid=2408168: Fri Dec 6 18:47:01 2024 00:35:07.080 read: IOPS=15, BW=63.3KiB/s (64.8kB/s)(64.0KiB/1011msec) 00:35:07.080 slat (nsec): min=25219, max=25739, avg=25383.94, stdev=140.88 00:35:07.080 clat (usec): min=1159, max=42062, avg=39055.46, stdev=10116.39 00:35:07.080 lat (usec): min=1184, max=42088, avg=39080.84, stdev=10116.38 00:35:07.080 clat percentiles (usec): 00:35:07.080 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[40633], 20.00th=[41157], 00:35:07.080 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:35:07.080 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:07.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:07.080 | 99.99th=[42206] 00:35:07.080 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:35:07.080 slat (nsec): min=10354, max=66193, avg=33697.46, stdev=5151.71 00:35:07.080 clat (usec): min=399, max=1482, avg=710.74, stdev=124.38 00:35:07.080 lat (usec): min=432, max=1515, avg=744.44, stdev=124.75 00:35:07.080 clat percentiles (usec): 00:35:07.080 | 1.00th=[ 469], 5.00th=[ 537], 10.00th=[ 570], 20.00th=[ 611], 00:35:07.080 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 725], 00:35:07.080 | 70.00th=[ 758], 80.00th=[ 799], 90.00th=[ 881], 95.00th=[ 930], 00:35:07.080 | 99.00th=[ 1012], 99.50th=[ 1221], 99.90th=[ 1483], 99.95th=[ 1483], 00:35:07.080 | 99.99th=[ 1483] 00:35:07.080 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:07.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:07.080 lat (usec) : 500=2.46%, 750=64.58%, 1000=28.41% 00:35:07.080 lat (msec) : 2=1.70%, 50=2.84% 00:35:07.080 cpu : usr=0.99%, sys=1.49%, ctx=528, majf=0, minf=1 00:35:07.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.080 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:07.080 job2: (groupid=0, jobs=1): err= 0: pid=2408170: Fri Dec 6 18:47:01 2024 00:35:07.080 read: IOPS=18, BW=75.7KiB/s (77.5kB/s)(76.0KiB/1004msec) 00:35:07.080 slat (nsec): min=28166, max=46662, avg=30043.21, stdev=4460.94 00:35:07.080 clat (usec): min=873, max=41544, avg=38878.39, stdev=9204.51 00:35:07.080 lat (usec): min=919, max=41572, avg=38908.44, stdev=9200.49 00:35:07.080 clat percentiles (usec): 00:35:07.080 | 1.00th=[ 873], 5.00th=[ 873], 10.00th=[40633], 
20.00th=[41157], 00:35:07.080 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:07.080 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:07.080 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:07.080 | 99.99th=[41681] 00:35:07.080 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:35:07.080 slat (usec): min=8, max=42002, avg=110.28, stdev=1855.09 00:35:07.080 clat (usec): min=118, max=745, avg=396.65, stdev=118.48 00:35:07.080 lat (usec): min=128, max=42314, avg=506.92, stdev=1855.33 00:35:07.080 clat percentiles (usec): 00:35:07.080 | 1.00th=[ 163], 5.00th=[ 221], 10.00th=[ 251], 20.00th=[ 293], 00:35:07.080 | 30.00th=[ 322], 40.00th=[ 351], 50.00th=[ 396], 60.00th=[ 429], 00:35:07.080 | 70.00th=[ 457], 80.00th=[ 494], 90.00th=[ 553], 95.00th=[ 603], 00:35:07.080 | 99.00th=[ 717], 99.50th=[ 725], 99.90th=[ 742], 99.95th=[ 742], 00:35:07.080 | 99.99th=[ 742] 00:35:07.080 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:07.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:07.080 lat (usec) : 250=9.60%, 500=68.93%, 750=17.89%, 1000=0.19% 00:35:07.080 lat (msec) : 50=3.39% 00:35:07.080 cpu : usr=0.70%, sys=1.89%, ctx=534, majf=0, minf=1 00:35:07.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.080 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:07.080 job3: (groupid=0, jobs=1): err= 0: pid=2408171: Fri Dec 6 18:47:01 2024 00:35:07.080 read: IOPS=15, BW=62.8KiB/s (64.3kB/s)(64.0KiB/1019msec) 00:35:07.080 slat (nsec): min=27795, max=28662, avg=28203.56, stdev=235.10 00:35:07.080 clat (usec): min=40957, max=42023, avg=41481.04, stdev=486.60 00:35:07.080 lat (usec): min=40985, max=42052, avg=41509.24, stdev=486.61 00:35:07.080 clat percentiles (usec): 00:35:07.080 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:07.080 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:35:07.080 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:07.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:07.080 | 99.99th=[42206] 00:35:07.080 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:35:07.080 slat (usec): min=9, max=28749, avg=89.85, stdev=1269.27 00:35:07.080 clat (usec): min=170, max=938, avg=592.19, stdev=140.16 00:35:07.080 lat (usec): min=207, max=29379, avg=682.05, stdev=1278.85 00:35:07.080 clat percentiles (usec): 00:35:07.080 | 1.00th=[ 273], 5.00th=[ 338], 10.00th=[ 396], 20.00th=[ 482], 00:35:07.080 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 603], 60.00th=[ 635], 00:35:07.080 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 807], 00:35:07.080 | 99.00th=[ 906], 99.50th=[ 938], 99.90th=[ 938], 99.95th=[ 938], 00:35:07.080 | 99.99th=[ 938] 00:35:07.080 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:07.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:07.080 lat (usec) : 250=0.38%, 500=24.05%, 750=60.04%, 1000=12.50% 00:35:07.080 lat (msec) : 50=3.03% 00:35:07.080 cpu : usr=0.69%, sys=2.45%, ctx=532, majf=0, minf=1 00:35:07.080 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:07.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:07.080 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:07.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:07.080 00:35:07.080 Run status group 0 (all jobs): 00:35:07.080 READ: bw=263KiB/s (269kB/s), 62.8KiB/s-75.7KiB/s (64.3kB/s-77.5kB/s), io=268KiB (274kB), run=1004-1019msec 00:35:07.080 WRITE: bw=8039KiB/s (8232kB/s), 2010KiB/s-2040KiB/s (2058kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1019msec 00:35:07.080 00:35:07.080 Disk stats (read/write): 00:35:07.080 nvme0n1: ios=38/512, merge=0/0, ticks=1216/306, in_queue=1522, util=85.57% 00:35:07.080 nvme0n2: ios=60/512, merge=0/0, ticks=479/346, in_queue=825, util=84.81% 00:35:07.080 nvme0n3: ios=72/512, merge=0/0, ticks=840/167, in_queue=1007, util=95.41% 00:35:07.080 nvme0n4: ios=72/512, merge=0/0, ticks=1480/248, in_queue=1728, util=98.28% 00:35:07.080 18:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:07.080 [global] 00:35:07.080 thread=1 00:35:07.080 invalidate=1 00:35:07.080 rw=randwrite 00:35:07.080 time_based=1 00:35:07.080 runtime=1 00:35:07.080 ioengine=libaio 00:35:07.080 direct=1 00:35:07.080 bs=4096 00:35:07.080 iodepth=1 00:35:07.080 norandommap=0 00:35:07.080 numjobs=1 00:35:07.080 00:35:07.080 verify_dump=1 00:35:07.081 verify_backlog=512 00:35:07.081 verify_state_save=0 00:35:07.081 do_verify=1 00:35:07.081 verify=crc32c-intel 00:35:07.081 [job0] 00:35:07.081 filename=/dev/nvme0n1 00:35:07.081 [job1] 00:35:07.081 filename=/dev/nvme0n2 00:35:07.081 [job2] 00:35:07.081 filename=/dev/nvme0n3 00:35:07.081 [job3] 00:35:07.081 filename=/dev/nvme0n4 00:35:07.081 Could not set queue depth (nvme0n1) 00:35:07.081 Could not set queue depth (nvme0n2) 00:35:07.081 Could not set queue depth (nvme0n3) 00:35:07.081 Could not set queue depth (nvme0n4) 00:35:07.672 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.672 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.672 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.672 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.672 fio-3.35 00:35:07.672 Starting 4 threads 00:35:09.059 00:35:09.059 job0: (groupid=0, jobs=1): err= 0: pid=2408646: Fri Dec 6 18:47:03 2024 00:35:09.059 read: IOPS=18, BW=73.9KiB/s (75.7kB/s)(76.0KiB/1028msec) 00:35:09.059 slat (nsec): min=26180, max=27119, avg=26537.95, stdev=195.95 00:35:09.059 clat (usec): min=618, max=42074, avg=39644.01, stdev=9456.30 00:35:09.059 lat (usec): min=645, max=42101, avg=39670.55, stdev=9456.16 00:35:09.059 clat percentiles (usec): 00:35:09.059 | 1.00th=[ 619], 5.00th=[ 619], 10.00th=[41157], 20.00th=[41157], 00:35:09.059 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:09.059 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:09.059 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:09.059 | 99.99th=[42206] 00:35:09.059 write: IOPS=498, BW=1992KiB/s 
(2040kB/s)(2048KiB/1028msec); 0 zone resets 00:35:09.059 slat (nsec): min=9622, max=87856, avg=27324.24, stdev=11624.27 00:35:09.059 clat (usec): min=124, max=861, avg=500.98, stdev=157.97 00:35:09.059 lat (usec): min=135, max=897, avg=528.30, stdev=164.55 00:35:09.059 clat percentiles (usec): 00:35:09.059 | 1.00th=[ 180], 5.00th=[ 247], 10.00th=[ 281], 20.00th=[ 355], 00:35:09.059 | 30.00th=[ 408], 40.00th=[ 465], 50.00th=[ 506], 60.00th=[ 553], 00:35:09.059 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 701], 95.00th=[ 742], 00:35:09.059 | 99.00th=[ 832], 99.50th=[ 840], 99.90th=[ 865], 99.95th=[ 865], 00:35:09.059 | 99.99th=[ 865] 00:35:09.059 bw ( KiB/s): min= 4087, max= 4087, per=41.75%, avg=4087.00, stdev= 0.00, samples=1 00:35:09.059 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:35:09.059 lat (usec) : 250=5.08%, 500=42.00%, 750=45.01%, 1000=4.52% 00:35:09.059 lat (msec) : 50=3.39% 00:35:09.059 cpu : usr=0.97%, sys=1.07%, ctx=535, majf=0, minf=1 00:35:09.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.059 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.059 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.059 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:09.059 job1: (groupid=0, jobs=1): err= 0: pid=2408663: Fri Dec 6 18:47:03 2024 00:35:09.059 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:09.059 slat (nsec): min=7747, max=44670, avg=26133.93, stdev=2651.63 00:35:09.059 clat (usec): min=523, max=1291, avg=1033.24, stdev=109.26 00:35:09.059 lat (usec): min=549, max=1317, avg=1059.37, stdev=109.34 00:35:09.059 clat percentiles (usec): 00:35:09.059 | 1.00th=[ 668], 5.00th=[ 807], 10.00th=[ 889], 20.00th=[ 971], 00:35:09.059 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:35:09.059 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:35:09.059 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1287], 99.95th=[ 1287], 00:35:09.059 | 99.99th=[ 1287] 00:35:09.059 write: IOPS=893, BW=3572KiB/s (3658kB/s)(3576KiB/1001msec); 0 zone resets 00:35:09.059 slat (nsec): min=9402, max=72595, avg=24241.59, stdev=11955.14 00:35:09.059 clat (usec): min=118, max=904, avg=476.59, stdev=169.40 00:35:09.059 lat (usec): min=128, max=937, avg=500.83, stdev=176.13 00:35:09.059 clat percentiles (usec): 00:35:09.059 | 1.00th=[ 167], 5.00th=[ 235], 10.00th=[ 262], 20.00th=[ 314], 00:35:09.059 | 30.00th=[ 351], 40.00th=[ 404], 50.00th=[ 465], 60.00th=[ 537], 00:35:09.059 | 70.00th=[ 594], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 750], 00:35:09.059 | 99.00th=[ 816], 99.50th=[ 865], 99.90th=[ 906], 99.95th=[ 906], 00:35:09.059 | 99.99th=[ 906] 00:35:09.059 bw ( KiB/s): min= 4087, max= 4087, per=41.75%, avg=4087.00, stdev= 0.00, samples=1 00:35:09.059 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:35:09.059 lat (usec) : 250=5.26%, 500=29.52%, 750=26.74%, 1000=12.02% 00:35:09.059 lat (msec) : 2=26.46% 00:35:09.059 cpu : usr=1.60%, sys=3.90%, ctx=1408, majf=0, minf=1 00:35:09.059 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.059 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.060 issued rwts: total=512,894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.060 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:35:09.060 job2: (groupid=0, jobs=1): err= 0: pid=2408681: Fri Dec 6 18:47:03 2024 00:35:09.060 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:09.060 slat (nsec): min=8811, max=61705, avg=27129.09, stdev=3735.34 00:35:09.060 clat (usec): min=784, max=1426, avg=1179.88, stdev=94.33 00:35:09.060 lat (usec): min=811, max=1466, avg=1207.01, stdev=94.54 00:35:09.060 clat percentiles (usec): 00:35:09.060 | 1.00th=[ 889], 5.00th=[ 979], 10.00th=[ 1057], 20.00th=[ 1123], 00:35:09.060 | 30.00th=[ 1156], 40.00th=[ 1172], 50.00th=[ 1188], 60.00th=[ 1205], 00:35:09.060 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1319], 00:35:09.060 | 99.00th=[ 1369], 99.50th=[ 1385], 99.90th=[ 1434], 99.95th=[ 1434], 00:35:09.060 | 99.99th=[ 1434] 00:35:09.060 write: IOPS=609, BW=2438KiB/s (2496kB/s)(2440KiB/1001msec); 0 zone resets 00:35:09.060 slat (nsec): min=9667, max=68186, avg=27775.96, stdev=11124.26 00:35:09.060 clat (usec): min=159, max=1030, avg=584.09, stdev=145.31 00:35:09.060 lat (usec): min=187, max=1080, avg=611.87, stdev=149.09 00:35:09.060 clat percentiles (usec): 00:35:09.060 | 1.00th=[ 247], 5.00th=[ 334], 10.00th=[ 383], 20.00th=[ 453], 00:35:09.060 | 30.00th=[ 506], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:35:09.060 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 791], 00:35:09.060 | 99.00th=[ 873], 99.50th=[ 988], 99.90th=[ 1029], 99.95th=[ 1029], 00:35:09.060 | 99.99th=[ 1029] 00:35:09.060 bw ( KiB/s): min= 4096, max= 4096, per=41.84%, avg=4096.00, stdev= 0.00, samples=1 00:35:09.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:09.060 lat (usec) : 250=0.62%, 500=14.53%, 750=32.26%, 1000=9.36% 00:35:09.060 lat (msec) : 2=43.23% 00:35:09.060 cpu : usr=1.80%, sys=3.00%, ctx=1124, majf=0, minf=2 00:35:09.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.060 issued rwts: total=512,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:09.060 job3: (groupid=0, jobs=1): err= 0: pid=2408688: Fri Dec 6 18:47:03 2024 00:35:09.060 read: IOPS=19, BW=77.4KiB/s (79.3kB/s)(80.0KiB/1033msec) 00:35:09.060 slat (nsec): min=26570, max=27645, avg=27206.25, stdev=276.93 00:35:09.060 clat (usec): min=911, max=41092, avg=38962.24, stdev=8956.68 00:35:09.060 lat (usec): min=937, max=41119, avg=38989.45, stdev=8956.83 00:35:09.060 clat percentiles (usec): 00:35:09.060 | 1.00th=[ 914], 5.00th=[ 914], 10.00th=[40633], 20.00th=[40633], 00:35:09.060 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:09.060 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:09.060 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:09.060 | 99.99th=[41157] 00:35:09.060 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:35:09.060 slat (nsec): min=9908, max=73238, avg=32492.22, stdev=7808.00 00:35:09.060 clat (usec): min=122, max=787, avg=453.23, stdev=123.47 00:35:09.060 lat (usec): min=133, max=860, avg=485.72, stdev=125.04 00:35:09.060 clat percentiles (usec): 00:35:09.060 | 1.00th=[ 206], 5.00th=[ 253], 10.00th=[ 318], 20.00th=[ 338], 00:35:09.060 | 30.00th=[ 363], 40.00th=[ 420], 50.00th=[ 453], 60.00th=[ 482], 00:35:09.060 | 70.00th=[ 506], 80.00th=[ 
562], 90.00th=[ 619], 95.00th=[ 676], 00:35:09.060 | 99.00th=[ 750], 99.50th=[ 783], 99.90th=[ 791], 99.95th=[ 791], 00:35:09.060 | 99.99th=[ 791] 00:35:09.060 bw ( KiB/s): min= 4096, max= 4096, per=41.84%, avg=4096.00, stdev= 0.00, samples=1 00:35:09.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:09.060 lat (usec) : 250=4.70%, 500=60.53%, 750=30.08%, 1000=1.13% 00:35:09.060 lat (msec) : 50=3.57% 00:35:09.060 cpu : usr=0.78%, sys=1.65%, ctx=533, majf=0, minf=2 00:35:09.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:09.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.060 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:09.060 00:35:09.060 Run status group 0 (all jobs): 00:35:09.060 READ: bw=4116KiB/s (4215kB/s), 73.9KiB/s-2046KiB/s (75.7kB/s-2095kB/s), io=4252KiB (4354kB), run=1001-1033msec 00:35:09.060 WRITE: bw=9789KiB/s (10.0MB/s), 1983KiB/s-3572KiB/s (2030kB/s-3658kB/s), io=9.88MiB (10.4MB), run=1001-1033msec 00:35:09.060 00:35:09.060 Disk stats (read/write): 00:35:09.060 nvme0n1: ios=54/512, merge=0/0, ticks=714/241, in_queue=955, util=87.37% 00:35:09.060 nvme0n2: ios=556/638, merge=0/0, ticks=690/260, in_queue=950, util=91.34% 00:35:09.060 nvme0n3: ios=487/512, merge=0/0, ticks=700/283, in_queue=983, util=93.67% 00:35:09.060 nvme0n4: ios=38/512, merge=0/0, ticks=1454/223, in_queue=1677, util=94.24% 00:35:09.060 18:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:09.060 [global] 00:35:09.060 thread=1 00:35:09.060 invalidate=1 00:35:09.060 rw=write 00:35:09.060 time_based=1 00:35:09.060 runtime=1 00:35:09.060 ioengine=libaio 00:35:09.060 direct=1 00:35:09.060 bs=4096 00:35:09.060 iodepth=128 00:35:09.060 norandommap=0 00:35:09.060 numjobs=1 00:35:09.060 00:35:09.060 verify_dump=1 00:35:09.060 verify_backlog=512 00:35:09.060 verify_state_save=0 00:35:09.060 do_verify=1 00:35:09.060 verify=crc32c-intel 00:35:09.060 [job0] 00:35:09.060 filename=/dev/nvme0n1 00:35:09.060 [job1] 00:35:09.060 filename=/dev/nvme0n2 00:35:09.060 [job2] 00:35:09.060 filename=/dev/nvme0n3 00:35:09.060 [job3] 00:35:09.060 filename=/dev/nvme0n4 00:35:09.060 Could not set queue depth (nvme0n1) 00:35:09.060 Could not set queue depth (nvme0n2) 00:35:09.060 Could not set queue depth (nvme0n3) 00:35:09.060 Could not set queue depth (nvme0n4) 00:35:09.320 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:09.320 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:09.320 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:09.320 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:09.320 fio-3.35 00:35:09.320 Starting 4 threads 00:35:10.723 00:35:10.723 job0: (groupid=0, jobs=1): err= 0: pid=2409101: Fri Dec 6 18:47:05 2024 00:35:10.723 read: IOPS=6577, BW=25.7MiB/s (26.9MB/s)(25.7MiB/1002msec) 00:35:10.723 slat (nsec): min=881, max=45505k, avg=74980.83, stdev=715592.85 00:35:10.723 clat (usec): min=1124, max=57073, avg=9794.46, stdev=7017.60 00:35:10.723 lat 
(usec): min=2848, max=57099, avg=9869.44, stdev=7052.74 00:35:10.723 clat percentiles (usec): 00:35:10.723 | 1.00th=[ 5145], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7308], 00:35:10.723 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8356], 00:35:10.723 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[13304], 95.00th=[19006], 00:35:10.723 | 99.00th=[50070], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:35:10.723 | 99.99th=[56886] 00:35:10.723 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:35:10.723 slat (nsec): min=1513, max=13014k, avg=71015.46, stdev=523823.91 00:35:10.723 clat (usec): min=1340, max=43808, avg=9397.07, stdev=5827.26 00:35:10.723 lat (usec): min=1352, max=43837, avg=9468.09, stdev=5883.19 00:35:10.723 clat percentiles (usec): 00:35:10.724 | 1.00th=[ 4293], 5.00th=[ 5473], 10.00th=[ 6259], 20.00th=[ 6980], 00:35:10.724 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8160], 00:35:10.724 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[12387], 95.00th=[28705], 00:35:10.724 | 99.00th=[34866], 99.50th=[36439], 99.90th=[38011], 99.95th=[40633], 00:35:10.724 | 99.99th=[43779] 00:35:10.724 bw ( KiB/s): min=24576, max=28672, per=28.30%, avg=26624.00, stdev=2896.31, samples=2 00:35:10.724 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:35:10.724 lat (msec) : 2=0.08%, 4=0.80%, 10=85.05%, 20=8.98%, 50=4.61% 00:35:10.724 lat (msec) : 100=0.49% 00:35:10.724 cpu : usr=3.90%, sys=5.09%, ctx=570, majf=0, minf=1 00:35:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:10.724 issued rwts: total=6591,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:10.724 job1: (groupid=0, jobs=1): err= 0: pid=2409118: Fri Dec 6 18:47:05 2024 00:35:10.724 read: IOPS=3251, BW=12.7MiB/s (13.3MB/s)(13.3MiB/1045msec) 00:35:10.724 slat (nsec): min=881, max=47683k, avg=154180.20, stdev=1421673.49 00:35:10.724 clat (usec): min=4729, max=66601, avg=21617.86, stdev=15989.75 00:35:10.724 lat (usec): min=4737, max=66605, avg=21772.04, stdev=16037.03 00:35:10.724 clat percentiles (usec): 00:35:10.724 | 1.00th=[ 5211], 5.00th=[ 6325], 10.00th=[ 7242], 20.00th=[ 7963], 00:35:10.724 | 30.00th=[ 9110], 40.00th=[13304], 50.00th=[16057], 60.00th=[17433], 00:35:10.724 | 70.00th=[26608], 80.00th=[33424], 90.00th=[48497], 95.00th=[54264], 00:35:10.724 | 99.00th=[63177], 99.50th=[66323], 99.90th=[66323], 99.95th=[66847], 00:35:10.724 | 99.99th=[66847] 00:35:10.724 write: IOPS=3429, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1045msec); 0 zone resets 00:35:10.724 slat (nsec): min=1542, max=15307k, avg=128501.93, stdev=904382.00 00:35:10.724 clat (usec): min=3243, max=58184, avg=16416.27, stdev=13469.05 00:35:10.724 lat (usec): min=3252, max=58187, avg=16544.78, stdev=13545.65 00:35:10.724 clat percentiles (usec): 00:35:10.724 | 1.00th=[ 4817], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 7111], 00:35:10.724 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 9241], 60.00th=[12649], 00:35:10.724 | 70.00th=[15795], 80.00th=[27132], 90.00th=[41681], 95.00th=[45876], 00:35:10.724 | 99.00th=[51119], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:35:10.724 | 99.99th=[57934] 00:35:10.724 bw ( KiB/s): min=12288, max=16416, per=15.26%, avg=14352.00, stdev=2918.94, samples=2 00:35:10.724 iops : min= 
3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:35:10.724 lat (msec) : 4=0.14%, 10=43.47%, 20=26.75%, 50=24.25%, 100=5.39% 00:35:10.724 cpu : usr=2.39%, sys=2.78%, ctx=343, majf=0, minf=1 00:35:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:35:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:10.724 issued rwts: total=3398,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:10.724 job2: (groupid=0, jobs=1): err= 0: pid=2409137: Fri Dec 6 18:47:05 2024 00:35:10.724 read: IOPS=6602, BW=25.8MiB/s (27.0MB/s)(25.8MiB/1002msec) 00:35:10.724 slat (nsec): min=954, max=8250.4k, avg=69279.00, stdev=499040.97 00:35:10.724 clat (usec): min=1082, max=21314, avg=9337.29, stdev=2380.25 00:35:10.724 lat (usec): min=1611, max=21321, avg=9406.57, stdev=2412.33 00:35:10.724 clat percentiles (usec): 00:35:10.724 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 7308], 20.00th=[ 8029], 00:35:10.724 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:35:10.724 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[12125], 95.00th=[13698], 00:35:10.724 | 99.00th=[17433], 99.50th=[20317], 99.90th=[21365], 99.95th=[21365], 00:35:10.724 | 99.99th=[21365] 00:35:10.724 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:35:10.724 slat (nsec): min=1632, max=21985k, avg=71174.29, stdev=503565.00 00:35:10.724 clat (usec): min=1197, max=29675, avg=9272.15, stdev=3270.14 00:35:10.724 lat (usec): min=1233, max=29726, avg=9343.32, stdev=3304.44 00:35:10.724 clat percentiles (usec): 00:35:10.724 | 1.00th=[ 2409], 5.00th=[ 4883], 10.00th=[ 5538], 20.00th=[ 7439], 00:35:10.724 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 9110], 00:35:10.724 | 70.00th=[ 9765], 80.00th=[11076], 90.00th=[13304], 95.00th=[16712], 00:35:10.724 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:35:10.724 | 99.99th=[29754] 00:35:10.724 bw ( KiB/s): min=24576, max=28672, per=28.30%, avg=26624.00, stdev=2896.31, samples=2 00:35:10.724 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:35:10.724 lat (msec) : 2=0.62%, 4=0.72%, 10=70.85%, 20=27.38%, 50=0.44% 00:35:10.724 cpu : usr=4.20%, sys=7.79%, ctx=536, majf=0, minf=1 00:35:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:10.724 issued rwts: total=6616,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:10.724 job3: (groupid=0, jobs=1): err= 0: pid=2409143: Fri Dec 6 18:47:05 2024 00:35:10.724 read: IOPS=7454, BW=29.1MiB/s (30.5MB/s)(29.2MiB/1004msec) 00:35:10.724 slat (nsec): min=955, max=7226.0k, avg=65386.32, stdev=510442.56 00:35:10.724 clat (usec): min=940, max=16384, avg=9096.43, stdev=2241.72 00:35:10.724 lat (usec): min=1509, max=16650, avg=9161.82, stdev=2277.67 00:35:10.724 clat percentiles (usec): 00:35:10.724 | 1.00th=[ 2704], 5.00th=[ 5800], 10.00th=[ 6849], 20.00th=[ 7701], 00:35:10.724 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9110], 00:35:10.724 | 70.00th=[ 9634], 80.00th=[10945], 90.00th=[12125], 95.00th=[13304], 00:35:10.724 | 99.00th=[15008], 99.50th=[15926], 99.90th=[16057], 
99.95th=[16188], 00:35:10.724 | 99.99th=[16450] 00:35:10.724 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:35:10.724 slat (nsec): min=1622, max=10199k, avg=57596.64, stdev=439550.84 00:35:10.724 clat (usec): min=923, max=17127, avg=7725.29, stdev=2443.43 00:35:10.724 lat (usec): min=925, max=17135, avg=7782.89, stdev=2459.25 00:35:10.724 clat percentiles (usec): 00:35:10.724 | 1.00th=[ 1795], 5.00th=[ 4146], 10.00th=[ 4686], 20.00th=[ 5538], 00:35:10.724 | 30.00th=[ 6652], 40.00th=[ 7308], 50.00th=[ 7701], 60.00th=[ 8029], 00:35:10.724 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[11207], 95.00th=[11600], 00:35:10.724 | 99.00th=[13698], 99.50th=[16319], 99.90th=[16319], 99.95th=[16450], 00:35:10.724 | 99.99th=[17171] 00:35:10.724 bw ( KiB/s): min=28672, max=32768, per=32.66%, avg=30720.00, stdev=2896.31, samples=2 00:35:10.724 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:35:10.724 lat (usec) : 1000=0.05% 00:35:10.724 lat (msec) : 2=0.74%, 4=2.66%, 10=74.29%, 20=22.26% 00:35:10.724 cpu : usr=6.38%, sys=7.58%, ctx=439, majf=0, minf=1 00:35:10.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:10.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:10.724 issued rwts: total=7484,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:10.724 00:35:10.724 Run status group 0 (all jobs): 00:35:10.724 READ: bw=90.0MiB/s (94.4MB/s), 12.7MiB/s-29.1MiB/s (13.3MB/s-30.5MB/s), io=94.1MiB (98.7MB), run=1002-1045msec 00:35:10.724 WRITE: bw=91.9MiB/s (96.3MB/s), 13.4MiB/s-29.9MiB/s (14.0MB/s-31.3MB/s), io=96.0MiB (101MB), run=1002-1045msec 00:35:10.724 00:35:10.724 Disk stats (read/write): 00:35:10.724 nvme0n1: ios=6091/6144, merge=0/0, ticks=24812/22695, in_queue=47507, util=87.88% 00:35:10.724 nvme0n2: ios=2577/2560, merge=0/0, ticks=16696/13829, in_queue=30525, util=88.69% 00:35:10.724 nvme0n3: ios=5193/5632, merge=0/0, ticks=28001/28457, in_queue=56458, util=96.84% 00:35:10.724 nvme0n4: ios=6188/6644, merge=0/0, ticks=51798/47482, in_queue=99280, util=92.00% 00:35:10.724 18:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:10.724 [global] 00:35:10.724 thread=1 00:35:10.724 invalidate=1 00:35:10.724 rw=randwrite 00:35:10.724 time_based=1 00:35:10.724 runtime=1 00:35:10.724 ioengine=libaio 00:35:10.724 direct=1 00:35:10.724 bs=4096 00:35:10.724 iodepth=128 00:35:10.724 norandommap=0 00:35:10.724 numjobs=1 00:35:10.724 00:35:10.724 verify_dump=1 00:35:10.724 verify_backlog=512 00:35:10.724 verify_state_save=0 00:35:10.724 do_verify=1 00:35:10.724 verify=crc32c-intel 00:35:10.724 [job0] 00:35:10.724 filename=/dev/nvme0n1 00:35:10.724 [job1] 00:35:10.724 filename=/dev/nvme0n2 00:35:10.724 [job2] 00:35:10.724 filename=/dev/nvme0n3 00:35:10.724 [job3] 00:35:10.724 filename=/dev/nvme0n4 00:35:10.724 Could not set queue depth (nvme0n1) 00:35:10.724 Could not set queue depth (nvme0n2) 00:35:10.724 Could not set queue depth (nvme0n3) 00:35:10.724 Could not set queue depth (nvme0n4) 00:35:10.988 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:10.988 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:35:10.988 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:10.988 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:10.988 fio-3.35 00:35:10.988 Starting 4 threads 00:35:12.397 00:35:12.397 job0: (groupid=0, jobs=1): err= 0: pid=2409582: Fri Dec 6 18:47:06 2024 00:35:12.397 read: IOPS=5669, BW=22.1MiB/s (23.2MB/s)(22.4MiB/1010msec) 00:35:12.397 slat (nsec): min=912, max=17165k, avg=72634.06, stdev=657135.22 00:35:12.397 clat (usec): min=2499, max=42728, avg=10585.50, stdev=6419.16 00:35:12.397 lat (usec): min=2805, max=42751, avg=10658.13, stdev=6476.37 00:35:12.397 clat percentiles (usec): 00:35:12.397 | 1.00th=[ 3523], 5.00th=[ 4948], 10.00th=[ 5342], 20.00th=[ 6194], 00:35:12.397 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 9634], 00:35:12.397 | 70.00th=[11338], 80.00th=[12780], 90.00th=[19530], 95.00th=[26084], 00:35:12.397 | 99.00th=[34866], 99.50th=[37487], 99.90th=[41681], 99.95th=[41681], 00:35:12.397 | 99.99th=[42730] 00:35:12.397 write: IOPS=6691, BW=26.1MiB/s (27.4MB/s)(26.4MiB/1010msec); 0 zone resets 00:35:12.397 slat (nsec): min=1549, max=15839k, avg=67492.96, stdev=645093.36 00:35:12.397 clat (usec): min=811, max=70132, avg=9996.95, stdev=7190.04 00:35:12.397 lat (usec): min=818, max=75211, avg=10064.44, stdev=7244.46 00:35:12.397 clat percentiles (usec): 00:35:12.397 | 1.00th=[ 2024], 5.00th=[ 3294], 10.00th=[ 3720], 20.00th=[ 4424], 00:35:12.397 | 30.00th=[ 5473], 40.00th=[ 6915], 50.00th=[ 8160], 60.00th=[ 8848], 00:35:12.397 | 70.00th=[10683], 80.00th=[14746], 90.00th=[21627], 95.00th=[25560], 00:35:12.397 | 99.00th=[31589], 99.50th=[35390], 99.90th=[69731], 99.95th=[69731], 00:35:12.397 | 99.99th=[69731] 00:35:12.397 bw ( KiB/s): min=24368, max=28672, per=28.63%, avg=26520.00, stdev=3043.39, samples=2 00:35:12.397 iops : min= 6092, max= 7168, avg=6630.00, stdev=760.85, samples=2 00:35:12.397 lat (usec) : 1000=0.09% 00:35:12.397 lat (msec) : 2=0.39%, 4=8.23%, 10=56.64%, 20=24.26%, 50=10.33% 00:35:12.397 lat (msec) : 100=0.06% 00:35:12.397 cpu : usr=5.05%, sys=6.54%, ctx=367, majf=0, minf=1 00:35:12.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:35:12.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:12.397 issued rwts: total=5726,6758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:12.397 job1: (groupid=0, jobs=1): err= 0: pid=2409587: Fri Dec 6 18:47:06 2024 00:35:12.397 read: IOPS=5456, BW=21.3MiB/s (22.3MB/s)(22.2MiB/1043msec) 00:35:12.397 slat (nsec): min=907, max=18957k, avg=81032.86, stdev=702620.72 00:35:12.397 clat (usec): min=3121, max=45942, avg=11664.63, stdev=8519.57 00:35:12.397 lat (usec): min=3124, max=46479, avg=11745.66, stdev=8568.25 00:35:12.397 clat percentiles (usec): 00:35:12.397 | 1.00th=[ 4113], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6325], 00:35:12.397 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 8094], 60.00th=[ 9110], 00:35:12.397 | 70.00th=[10945], 80.00th=[15664], 90.00th=[22938], 95.00th=[31851], 00:35:12.397 | 99.00th=[42730], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:35:12.397 | 99.99th=[45876] 00:35:12.397 write: IOPS=5890, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1043msec); 0 zone resets 00:35:12.397 slat (nsec): min=1489, max=29014k, 
avg=78178.15, stdev=732339.31 00:35:12.397 clat (usec): min=611, max=104720, avg=10739.33, stdev=10252.89 00:35:12.397 lat (usec): min=617, max=104728, avg=10817.51, stdev=10324.01 00:35:12.397 clat percentiles (msec): 00:35:12.397 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:35:12.397 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 9], 00:35:12.397 | 70.00th=[ 11], 80.00th=[ 13], 90.00th=[ 21], 95.00th=[ 28], 00:35:12.397 | 99.00th=[ 54], 99.50th=[ 63], 99.90th=[ 105], 99.95th=[ 105], 00:35:12.397 | 99.99th=[ 105] 00:35:12.397 bw ( KiB/s): min=19408, max=29200, per=26.24%, avg=24304.00, stdev=6923.99, samples=2 00:35:12.397 iops : min= 4852, max= 7300, avg=6076.00, stdev=1731.00, samples=2 00:35:12.397 lat (usec) : 750=0.03% 00:35:12.397 lat (msec) : 2=0.21%, 4=2.57%, 10=63.71%, 20=21.20%, 50=11.54% 00:35:12.397 lat (msec) : 100=0.60%, 250=0.14% 00:35:12.397 cpu : usr=3.74%, sys=5.09%, ctx=399, majf=0, minf=2 00:35:12.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:35:12.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:12.397 issued rwts: total=5691,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:12.397 job2: (groupid=0, jobs=1): err= 0: pid=2409601: Fri Dec 6 18:47:06 2024 00:35:12.397 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:35:12.397 slat (nsec): min=908, max=11458k, avg=76825.78, stdev=489674.25 00:35:12.397 clat (usec): min=4558, max=72553, avg=10763.29, stdev=7400.17 00:35:12.397 lat (usec): min=4567, max=75523, avg=10840.12, stdev=7442.99 00:35:12.397 clat percentiles (usec): 00:35:12.397 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7439], 00:35:12.397 | 30.00th=[ 8029], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9634], 00:35:12.397 | 70.00th=[10421], 80.00th=[11731], 90.00th=[13829], 95.00th=[18482], 00:35:12.397 | 99.00th=[44827], 99.50th=[61604], 99.90th=[70779], 99.95th=[72877], 00:35:12.397 | 99.99th=[72877] 00:35:12.397 write: IOPS=5857, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1007msec); 0 zone resets 00:35:12.397 slat (nsec): min=1649, max=10635k, avg=91611.35, stdev=566520.38 00:35:12.397 clat (usec): min=1234, max=87036, avg=11390.58, stdev=11903.12 00:35:12.397 lat (usec): min=1246, max=87044, avg=11482.19, stdev=11995.66 00:35:12.397 clat percentiles (usec): 00:35:12.397 | 1.00th=[ 4948], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7242], 00:35:12.397 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9241], 00:35:12.397 | 70.00th=[10028], 80.00th=[10945], 90.00th=[13829], 95.00th=[22152], 00:35:12.397 | 99.00th=[78119], 99.50th=[81265], 99.90th=[86508], 99.95th=[87557], 00:35:12.397 | 99.99th=[87557] 00:35:12.397 bw ( KiB/s): min=18168, max=27992, per=24.92%, avg=23080.00, stdev=6946.62, samples=2 00:35:12.397 iops : min= 4542, max= 6998, avg=5770.00, stdev=1736.65, samples=2 00:35:12.397 lat (msec) : 2=0.02%, 4=0.01%, 10=67.06%, 20=27.88%, 50=3.03% 00:35:12.397 lat (msec) : 100=2.00% 00:35:12.397 cpu : usr=3.28%, sys=6.66%, ctx=476, majf=0, minf=1 00:35:12.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:35:12.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:12.397 issued rwts: total=5632,5898,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.397 
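Every per-job dump in these fio runs has the same shape: slat/clat/lat percentiles, a bw/iops summary with min/max/avg/stdev, the latency-bucket mix, and the issued-rwts accounting. When only the headline throughput matters, the read-side figures can be pulled out of a captured log with a short sketch like the one below; the log file name is a placeholder, and this is an editorial aside, not part of the fio-wrapper harness.

  # Sketch: print fio's human-readable "read: IOPS=..., BW=..." headline
  # for every job in a saved log (nvmf-fio.log is hypothetical).
  awk '/read: IOPS=/ { sub(/.*read: IOPS=/, "IOPS="); print }' nvmf-fio.log
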
latency : target=0, window=0, percentile=100.00%, depth=128 00:35:12.397 job3: (groupid=0, jobs=1): err= 0: pid=2409608: Fri Dec 6 18:47:06 2024 00:35:12.397 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:35:12.397 slat (nsec): min=945, max=14431k, avg=80867.85, stdev=638217.49 00:35:12.397 clat (usec): min=2806, max=38927, avg=11381.17, stdev=4410.70 00:35:12.397 lat (usec): min=2815, max=38930, avg=11462.04, stdev=4452.69 00:35:12.397 clat percentiles (usec): 00:35:12.397 | 1.00th=[ 3916], 5.00th=[ 6390], 10.00th=[ 7635], 20.00th=[ 8356], 00:35:12.397 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[11207], 00:35:12.397 | 70.00th=[12256], 80.00th=[13435], 90.00th=[17433], 95.00th=[19792], 00:35:12.397 | 99.00th=[27657], 99.50th=[34341], 99.90th=[38011], 99.95th=[39060], 00:35:12.397 | 99.99th=[39060] 00:35:12.397 write: IOPS=5326, BW=20.8MiB/s (21.8MB/s)(20.9MiB/1005msec); 0 zone resets 00:35:12.397 slat (nsec): min=1561, max=18951k, avg=96276.03, stdev=629860.28 00:35:12.397 clat (usec): min=1241, max=47949, avg=12945.72, stdev=9967.27 00:35:12.397 lat (usec): min=1252, max=47958, avg=13041.99, stdev=10033.62 00:35:12.397 clat percentiles (usec): 00:35:12.397 | 1.00th=[ 2606], 5.00th=[ 4948], 10.00th=[ 5669], 20.00th=[ 6456], 00:35:12.397 | 30.00th=[ 7504], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9634], 00:35:12.397 | 70.00th=[10945], 80.00th=[19006], 90.00th=[31065], 95.00th=[35914], 00:35:12.397 | 99.00th=[44827], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:35:12.397 | 99.99th=[47973] 00:35:12.397 bw ( KiB/s): min=20480, max=21320, per=22.56%, avg=20900.00, stdev=593.97, samples=2 00:35:12.397 iops : min= 5120, max= 5330, avg=5225.00, stdev=148.49, samples=2 00:35:12.397 lat (msec) : 2=0.27%, 4=1.21%, 10=54.16%, 20=32.08%, 50=12.28% 00:35:12.397 cpu : usr=3.29%, sys=5.78%, ctx=367, majf=0, minf=2 00:35:12.397 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:12.397 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.397 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:12.397 issued rwts: total=5120,5353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.397 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:12.398 00:35:12.398 Run status group 0 (all jobs): 00:35:12.398 READ: bw=83.0MiB/s (87.1MB/s), 19.9MiB/s-22.1MiB/s (20.9MB/s-23.2MB/s), io=86.6MiB (90.8MB), run=1005-1043msec 00:35:12.398 WRITE: bw=90.5MiB/s (94.9MB/s), 20.8MiB/s-26.1MiB/s (21.8MB/s-27.4MB/s), io=94.3MiB (98.9MB), run=1005-1043msec 00:35:12.398 00:35:12.398 Disk stats (read/write): 00:35:12.398 nvme0n1: ios=4658/5033, merge=0/0, ticks=37272/38724, in_queue=75996, util=95.49% 00:35:12.398 nvme0n2: ios=5150/5356, merge=0/0, ticks=37833/30699, in_queue=68532, util=86.43% 00:35:12.398 nvme0n3: ios=5530/5632, merge=0/0, ticks=21401/21972, in_queue=43373, util=88.38% 00:35:12.398 nvme0n4: ios=3901/4096, merge=0/0, ticks=44790/56362, in_queue=101152, util=89.52% 00:35:12.398 18:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:12.398 18:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2409757 00:35:12.398 18:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:12.398 18:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@61 -- # sleep 3 00:35:12.398 [global] 00:35:12.398 thread=1 00:35:12.398 invalidate=1 00:35:12.398 rw=read 00:35:12.398 time_based=1 00:35:12.398 runtime=10 00:35:12.398 ioengine=libaio 00:35:12.398 direct=1 00:35:12.398 bs=4096 00:35:12.398 iodepth=1 00:35:12.398 norandommap=1 00:35:12.398 numjobs=1 00:35:12.398 00:35:12.398 [job0] 00:35:12.398 filename=/dev/nvme0n1 00:35:12.398 [job1] 00:35:12.398 filename=/dev/nvme0n2 00:35:12.398 [job2] 00:35:12.398 filename=/dev/nvme0n3 00:35:12.398 [job3] 00:35:12.398 filename=/dev/nvme0n4 00:35:12.398 Could not set queue depth (nvme0n1) 00:35:12.398 Could not set queue depth (nvme0n2) 00:35:12.398 Could not set queue depth (nvme0n3) 00:35:12.398 Could not set queue depth (nvme0n4) 00:35:12.661 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.661 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.661 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.661 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.661 fio-3.35 00:35:12.661 Starting 4 threads 00:35:15.205 18:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:15.465 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:15.465 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=1204224, buflen=4096 00:35:15.465 fio: pid=2410098, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:15.465 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:15.465 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:15.465 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4280320, buflen=4096 00:35:15.465 fio: pid=2410086, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:15.726 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:15.726 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:15.726 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1269760, buflen=4096 00:35:15.726 fio: pid=2410032, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:15.989 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:15.989 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:15.989 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6942720, buflen=4096 00:35:15.989 fio: pid=2410055, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:15.989 00:35:15.989 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2410032: Fri Dec 6 18:47:10 2024 00:35:15.989 read: IOPS=106, BW=426KiB/s (436kB/s)(1240KiB/2911msec) 00:35:15.989 slat (usec): min=7, max=27618, avg=162.27, stdev=1767.67 00:35:15.989 clat (usec): min=598, max=45038, avg=9153.77, stdev=16341.50 00:35:15.989 lat (usec): min=606, max=68951, avg=9316.48, stdev=16702.87 00:35:15.989 clat percentiles (usec): 00:35:15.989 | 1.00th=[ 742], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 955], 00:35:15.989 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045], 00:35:15.989 | 70.00th=[ 1090], 80.00th=[ 1369], 90.00th=[41681], 95.00th=[42206], 00:35:15.989 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:35:15.989 | 99.99th=[44827] 00:35:15.989 bw ( KiB/s): min= 96, max= 2016, per=11.14%, avg=481.60, stdev=857.76, samples=5 00:35:15.989 iops : min= 24, max= 504, avg=120.40, stdev=214.44, samples=5 00:35:15.989 lat (usec) : 750=1.29%, 1000=39.87% 00:35:15.989 lat (msec) : 2=38.59%, 50=19.94% 00:35:15.989 cpu : usr=0.10%, sys=0.48%, ctx=313, majf=0, minf=1 00:35:15.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.989 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.989 issued rwts: total=311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.989 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2410055: Fri Dec 6 18:47:10 2024 00:35:15.989 read: IOPS=547, BW=2189KiB/s (2242kB/s)(6780KiB/3097msec) 00:35:15.989 slat (usec): min=6, max=33485, avg=57.94, stdev=914.68 00:35:15.989 clat (usec): min=151, max=42068, avg=1750.11, stdev=6315.73 00:35:15.989 lat (usec): min=158, max=47052, avg=1808.07, stdev=6397.24 00:35:15.989 clat percentiles (usec): 00:35:15.989 | 1.00th=[ 449], 5.00th=[ 529], 10.00th=[ 545], 20.00th=[ 594], 00:35:15.989 | 30.00th=[ 668], 40.00th=[ 734], 50.00th=[ 783], 60.00th=[ 840], 00:35:15.989 | 70.00th=[ 873], 80.00th=[ 898], 90.00th=[ 922], 95.00th=[ 963], 00:35:15.989 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:15.989 | 99.99th=[42206] 00:35:15.989 bw ( KiB/s): min= 96, max= 5096, per=51.98%, avg=2245.83, stdev=2291.44, samples=6 00:35:15.989 iops : min= 24, max= 1274, avg=561.33, stdev=572.91, samples=6 00:35:15.989 lat (usec) : 250=0.12%, 500=1.53%, 750=40.51%, 1000=54.48% 00:35:15.989 lat (msec) : 2=0.83%, 4=0.06%, 50=2.42% 00:35:15.989 cpu : usr=0.55%, sys=1.65%, ctx=1699, majf=0, minf=2 00:35:15.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.989 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.989 issued rwts: total=1696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.989 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2410086: Fri Dec 6 18:47:10 2024 00:35:15.989 read: IOPS=383, BW=1534KiB/s (1571kB/s)(4180KiB/2725msec) 00:35:15.989 slat (usec): min=7, max=9713, avg=36.23, stdev=299.52 00:35:15.989 clat (usec): 
min=732, max=42360, avg=2543.65, stdev=7807.30 00:35:15.989 lat (usec): min=759, max=51019, avg=2579.89, stdev=7859.02 00:35:15.989 clat percentiles (usec): 00:35:15.989 | 1.00th=[ 816], 5.00th=[ 873], 10.00th=[ 914], 20.00th=[ 947], 00:35:15.989 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:35:15.989 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:35:15.989 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:15.989 | 99.99th=[42206] 00:35:15.989 bw ( KiB/s): min= 96, max= 3888, per=38.53%, avg=1664.00, stdev=1605.62, samples=5 00:35:15.989 iops : min= 24, max= 972, avg=416.00, stdev=401.41, samples=5 00:35:15.989 lat (usec) : 750=0.10%, 1000=53.82% 00:35:15.989 lat (msec) : 2=42.16%, 50=3.82% 00:35:15.989 cpu : usr=1.10%, sys=1.14%, ctx=1048, majf=0, minf=2 00:35:15.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.989 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.989 issued rwts: total=1046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.989 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2410098: Fri Dec 6 18:47:10 2024 00:35:15.989 read: IOPS=114, BW=458KiB/s (469kB/s)(1176KiB/2567msec) 00:35:15.989 slat (nsec): min=3606, max=43026, avg=21813.18, stdev=8684.35 00:35:15.989 clat (usec): min=700, max=41208, avg=8628.60, stdev=15727.82 00:35:15.989 lat (usec): min=727, max=41217, avg=8650.37, stdev=15719.86 00:35:15.989 clat percentiles (usec): 00:35:15.989 | 1.00th=[ 766], 5.00th=[ 881], 10.00th=[ 930], 20.00th=[ 963], 00:35:15.989 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1057], 00:35:15.989 | 70.00th=[ 1090], 80.00th=[ 1237], 90.00th=[41157], 95.00th=[41157], 00:35:15.989 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:15.989 | 99.99th=[41157] 00:35:15.989 bw ( KiB/s): min= 96, max= 1952, per=10.81%, avg=467.20, stdev=830.03, samples=5 00:35:15.989 iops : min= 24, max= 488, avg=116.80, stdev=207.51, samples=5 00:35:15.989 lat (usec) : 750=0.68%, 1000=32.88% 00:35:15.989 lat (msec) : 2=47.12%, 50=18.98% 00:35:15.989 cpu : usr=0.12%, sys=0.27%, ctx=295, majf=0, minf=2 00:35:15.990 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.990 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.990 issued rwts: total=295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.990 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:15.990 00:35:15.990 Run status group 0 (all jobs): 00:35:15.990 READ: bw=4319KiB/s (4423kB/s), 426KiB/s-2189KiB/s (436kB/s-2242kB/s), io=13.1MiB (13.7MB), run=2567-3097msec 00:35:15.990 00:35:15.990 Disk stats (read/write): 00:35:15.990 nvme0n1: ios=307/0, merge=0/0, ticks=2705/0, in_queue=2705, util=91.39% 00:35:15.990 nvme0n2: ios=1693/0, merge=0/0, ticks=2829/0, in_queue=2829, util=92.59% 00:35:15.990 nvme0n3: ios=1040/0, merge=0/0, ticks=2353/0, in_queue=2353, util=95.47% 00:35:15.990 nvme0n4: ios=293/0, merge=0/0, ticks=2496/0, in_queue=2496, util=96.30% 00:35:15.990 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:35:15.990 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:16.250 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:16.250 18:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:16.510 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:16.510 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:16.770 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:16.770 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:16.770 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:16.770 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2409757 00:35:16.770 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:16.770 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:17.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:17.030 nvmf hotplug test: fio failed as expected 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:35:17.030 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:17.291 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:17.291 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:17.291 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:17.291 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:17.292 rmmod nvme_tcp 00:35:17.292 rmmod nvme_fabrics 00:35:17.292 rmmod nvme_keyring 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2406584 ']' 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2406584 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2406584 ']' 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2406584 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2406584 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2406584' 00:35:17.292 killing process with pid 2406584 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2406584 00:35:17.292 18:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2406584 00:35:17.292 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:17.292 18:47:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:17.292 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:17.292 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:17.292 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:17.292 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:17.292 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:17.292 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:17.553 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:17.553 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.553 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:17.553 18:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.466 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:19.466 00:35:19.466 real 0m28.434s 00:35:19.466 user 2m24.058s 00:35:19.466 sys 0m12.311s 00:35:19.466 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.466 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:19.466 ************************************ 00:35:19.466 END TEST nvmf_fio_target 00:35:19.466 ************************************ 00:35:19.466 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:19.466 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:19.466 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.466 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:19.466 ************************************ 00:35:19.466 START TEST nvmf_bdevio 00:35:19.466 ************************************ 00:35:19.466 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:19.729 * Looking for test storage... 
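The killprocess call traced in the teardown just above follows a fixed shape in common/autotest_common.sh. Reconstructed from the traced line numbers (@954-@978) as a minimal sketch — anything not visible in the trace is marked as an assumption:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                            # traced as: '[' -z 2406584 ']'
    kill -0 "$pid" 2> /dev/null || return 0              # @958; exact failure handling assumed
    local process_name=unknown
    if [[ $(uname) == Linux ]]; then                     # @959
        process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_0 in this run
    fi
    [[ $process_name != sudo ]] || return 1              # @964: sudo-wrapped branch not taken here; handling assumed
    echo "killing process with pid $pid"                 # @972
    kill "$pid"                                          # @973
    wait "$pid"                                          # @978: reap before tearing the network down
}

In the run above this kills the nvmf_tgt reactor (pid 2406584, comm reactor_0) and reaps it before the TCP cleanup continues.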
00:35:19.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:19.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.729 --rc genhtml_branch_coverage=1 00:35:19.729 --rc genhtml_function_coverage=1 00:35:19.729 --rc genhtml_legend=1 00:35:19.729 --rc geninfo_all_blocks=1 00:35:19.729 --rc geninfo_unexecuted_blocks=1 00:35:19.729 00:35:19.729 ' 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:19.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.729 --rc genhtml_branch_coverage=1 00:35:19.729 --rc genhtml_function_coverage=1 00:35:19.729 --rc genhtml_legend=1 00:35:19.729 --rc geninfo_all_blocks=1 00:35:19.729 --rc geninfo_unexecuted_blocks=1 00:35:19.729 00:35:19.729 ' 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:19.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.729 --rc genhtml_branch_coverage=1 00:35:19.729 --rc genhtml_function_coverage=1 00:35:19.729 --rc genhtml_legend=1 00:35:19.729 --rc geninfo_all_blocks=1 00:35:19.729 --rc geninfo_unexecuted_blocks=1 00:35:19.729 00:35:19.729 ' 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:19.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.729 --rc genhtml_branch_coverage=1 00:35:19.729 --rc genhtml_function_coverage=1 00:35:19.729 --rc genhtml_legend=1 00:35:19.729 --rc geninfo_all_blocks=1 00:35:19.729 --rc geninfo_unexecuted_blocks=1 00:35:19.729 00:35:19.729 ' 00:35:19.729 18:47:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.729 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.730 18:47:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:19.730 18:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:27.875 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.875 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.875 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.875 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.875 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.875 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.875 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.875 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:27.876 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:27.876 18:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:27.876 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:27.876 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:27.876 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:35:27.876 00:35:27.876 --- 10.0.0.2 ping statistics --- 00:35:27.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.876 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:35:27.876 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:27.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:35:27.876 00:35:27.876 --- 10.0.0.1 ping statistics --- 00:35:27.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.877 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:27.877 18:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2415058 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2415058 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2415058 ']' 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.877 18:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:27.877 [2024-12-06 18:47:21.974583] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:27.877 [2024-12-06 18:47:21.975724] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:35:27.877 [2024-12-06 18:47:21.975775] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.877 [2024-12-06 18:47:22.074803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:27.877 [2024-12-06 18:47:22.127260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.877 [2024-12-06 18:47:22.127315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.877 [2024-12-06 18:47:22.127324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.877 [2024-12-06 18:47:22.127331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.877 [2024-12-06 18:47:22.127337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.877 [2024-12-06 18:47:22.129716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:27.877 [2024-12-06 18:47:22.129914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:27.877 [2024-12-06 18:47:22.130247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:27.877 [2024-12-06 18:47:22.130249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.877 [2024-12-06 18:47:22.207990] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
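For orientation, the nvmf_tcp_init sequence traced a few entries back (nvmf/common.sh@250-@291) reduces to the commands below, using the interface names and addresses printed in this run; only the comments are added:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # first e810 port becomes the target side
ip addr add 10.0.0.1/24 dev cvl_0_1                  # second port stays in the host ns as initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port; the comment tag lets iptr strip the rule on teardown
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target, 0.582 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator, 0.327 ms above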
00:35:27.877 [2024-12-06 18:47:22.209179] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:27.877 [2024-12-06 18:47:22.209301] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:27.877 [2024-12-06 18:47:22.209783] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:27.877 [2024-12-06 18:47:22.209823] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:28.138 [2024-12-06 18:47:22.831254] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:28.138 Malloc0 00:35:28.138 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.139 18:47:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.139 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:28.400 [2024-12-06 18:47:22.923544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.400 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.400 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:28.400 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:28.400 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:28.400 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:28.400 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:28.400 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:28.400 { 00:35:28.400 "params": { 00:35:28.400 "name": "Nvme$subsystem", 00:35:28.400 "trtype": "$TEST_TRANSPORT", 00:35:28.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:28.400 "adrfam": "ipv4", 00:35:28.400 "trsvcid": "$NVMF_PORT", 00:35:28.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:28.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:28.401 "hdgst": ${hdgst:-false}, 00:35:28.401 "ddgst": ${ddgst:-false} 00:35:28.401 }, 00:35:28.401 "method": "bdev_nvme_attach_controller" 00:35:28.401 } 00:35:28.401 EOF 00:35:28.401 )") 00:35:28.401 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:28.401 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:28.401 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:28.401 18:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:28.401 "params": { 00:35:28.401 "name": "Nvme1", 00:35:28.401 "trtype": "tcp", 00:35:28.401 "traddr": "10.0.0.2", 00:35:28.401 "adrfam": "ipv4", 00:35:28.401 "trsvcid": "4420", 00:35:28.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:28.401 "hdgst": false, 00:35:28.401 "ddgst": false 00:35:28.401 }, 00:35:28.401 "method": "bdev_nvme_attach_controller" 00:35:28.401 }' 00:35:28.401 [2024-12-06 18:47:22.981856] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
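The JSON printed just above is produced by gen_nvmf_target_json and handed to bdevio as --json /dev/fd/62, i.e. via process substitution. A condensed sketch of the pattern visible in the trace (the heredoc, the IFS=, join and the jq pretty-print); the outer subsystems/bdev wrapper is not shown in the trace and is an assumption:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do                   # defaults to a single subsystem, "1"
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                                      # join the per-subsystem fragments with commas
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}

# invocation as seen above; <() is what shows up in the trace as /dev/fd/62
bdevio --json <(gen_nvmf_target_json)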
00:35:28.401 [2024-12-06 18:47:22.981930] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415311 ] 00:35:28.401 [2024-12-06 18:47:23.076040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:28.401 [2024-12-06 18:47:23.132480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:28.401 [2024-12-06 18:47:23.132652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.401 [2024-12-06 18:47:23.132671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:28.663 I/O targets: 00:35:28.663 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:28.663 00:35:28.663 00:35:28.663 CUnit - A unit testing framework for C - Version 2.1-3 00:35:28.663 http://cunit.sourceforge.net/ 00:35:28.663 00:35:28.663 00:35:28.663 Suite: bdevio tests on: Nvme1n1 00:35:28.663 Test: blockdev write read block ...passed 00:35:28.663 Test: blockdev write zeroes read block ...passed 00:35:28.663 Test: blockdev write zeroes read no split ...passed 00:35:28.663 Test: blockdev write zeroes read split ...passed 00:35:28.663 Test: blockdev write zeroes read split partial ...passed 00:35:28.663 Test: blockdev reset ...[2024-12-06 18:47:23.424605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:28.663 [2024-12-06 18:47:23.424717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec2580 (9): Bad file descriptor 00:35:28.925 [2024-12-06 18:47:23.477597] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:28.925 passed 00:35:28.925 Test: blockdev write read 8 blocks ...passed 00:35:28.925 Test: blockdev write read size > 128k ...passed 00:35:28.925 Test: blockdev write read invalid size ...passed 00:35:28.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:28.925 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:28.925 Test: blockdev write read max offset ...passed 00:35:28.925 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:28.925 Test: blockdev writev readv 8 blocks ...passed 00:35:28.925 Test: blockdev writev readv 30 x 1block ...passed 00:35:28.925 Test: blockdev writev readv block ...passed 00:35:28.925 Test: blockdev writev readv size > 128k ...passed 00:35:29.188 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:29.188 Test: blockdev comparev and writev ...[2024-12-06 18:47:23.738109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:29.188 [2024-12-06 18:47:23.738155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.738172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:29.188 [2024-12-06 18:47:23.738181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.738546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:29.188 [2024-12-06 18:47:23.738558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.738580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:29.188 [2024-12-06 18:47:23.738588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.738947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:29.188 [2024-12-06 18:47:23.738960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.738975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:29.188 [2024-12-06 18:47:23.738983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.739349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:29.188 [2024-12-06 18:47:23.739361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.739375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:29.188 [2024-12-06 18:47:23.739382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:29.188 passed 00:35:29.188 Test: blockdev nvme passthru rw ...passed 00:35:29.188 Test: blockdev nvme passthru vendor specific ...[2024-12-06 18:47:23.823969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:29.188 [2024-12-06 18:47:23.823985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.824127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:29.188 [2024-12-06 18:47:23.824138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.824280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:29.188 [2024-12-06 18:47:23.824290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:29.188 [2024-12-06 18:47:23.824410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:29.188 [2024-12-06 18:47:23.824420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:29.188 passed 00:35:29.188 Test: blockdev nvme admin passthru ...passed 00:35:29.188 Test: blockdev copy ...passed 00:35:29.188 00:35:29.188 Run Summary: Type Total Ran Passed Failed Inactive 00:35:29.188 suites 1 1 n/a 0 0 00:35:29.188 tests 23 23 23 0 0 00:35:29.188 asserts 152 152 152 0 n/a 00:35:29.188 00:35:29.188 Elapsed time = 1.173 seconds 00:35:29.449 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:29.449 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.449 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:29.450 rmmod nvme_tcp 00:35:29.450 rmmod nvme_fabrics 00:35:29.450 rmmod nvme_keyring 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
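nvmftestfini is mid-flight here; after unloading the nvme-tcp modules it calls iptr, which the traces (nvmf/common.sh@791) show to be a one-line pipeline that drops every firewall rule ipts tagged during setup:

iptr() {
    # restore the ruleset minus anything carrying the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
}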
00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2415058 ']' 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2415058 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2415058 ']' 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2415058 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415058 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415058' 00:35:29.450 killing process with pid 2415058 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2415058 00:35:29.450 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2415058 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:29.712 18:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.258 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.258 00:35:32.258 real 0m12.196s 00:35:32.258 user 
0m9.491s 00:35:32.258 sys 0m6.387s 00:35:32.258 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.258 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:32.258 ************************************ 00:35:32.258 END TEST nvmf_bdevio 00:35:32.258 ************************************ 00:35:32.258 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:32.258 00:35:32.258 real 4m59.820s 00:35:32.258 user 10m23.594s 00:35:32.258 sys 2m4.632s 00:35:32.258 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.258 18:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:32.258 ************************************ 00:35:32.258 END TEST nvmf_target_core_interrupt_mode 00:35:32.258 ************************************ 00:35:32.258 18:47:26 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:32.258 18:47:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:32.258 18:47:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.258 18:47:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:32.258 ************************************ 00:35:32.258 START TEST nvmf_interrupt 00:35:32.258 ************************************ 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:32.258 * Looking for test storage... 
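Every suite in this log, including the nvmf_interrupt run starting here, goes through the run_test wrapper. Judging from the START/END banners and the real/user/sys lines it emits, it behaves roughly like the sketch below; only the banner text and the use of time are taken from the log, the status propagation is an assumption:

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"            # produces the real/user/sys lines printed above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}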
00:35:32.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:32.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.258 --rc genhtml_branch_coverage=1 00:35:32.258 --rc genhtml_function_coverage=1 00:35:32.258 --rc genhtml_legend=1 00:35:32.258 --rc geninfo_all_blocks=1 00:35:32.258 --rc geninfo_unexecuted_blocks=1 00:35:32.258 00:35:32.258 ' 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:32.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.258 --rc genhtml_branch_coverage=1 00:35:32.258 --rc genhtml_function_coverage=1 00:35:32.258 --rc genhtml_legend=1 00:35:32.258 --rc geninfo_all_blocks=1 00:35:32.258 --rc geninfo_unexecuted_blocks=1 00:35:32.258 00:35:32.258 ' 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:32.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.258 --rc genhtml_branch_coverage=1 00:35:32.258 --rc genhtml_function_coverage=1 00:35:32.258 --rc genhtml_legend=1 00:35:32.258 --rc geninfo_all_blocks=1 00:35:32.258 --rc geninfo_unexecuted_blocks=1 00:35:32.258 00:35:32.258 ' 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:32.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.258 --rc genhtml_branch_coverage=1 00:35:32.258 --rc genhtml_function_coverage=1 00:35:32.258 --rc genhtml_legend=1 00:35:32.258 --rc geninfo_all_blocks=1 00:35:32.258 --rc geninfo_unexecuted_blocks=1 00:35:32.258 00:35:32.258 ' 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.258 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.259 18:47:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:38.842 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:38.843 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.843 18:47:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:38.843 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:38.843 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:38.843 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:38.843 18:47:33 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:38.843 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:39.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:39.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:35:39.104 00:35:39.104 --- 10.0.0.2 ping statistics --- 00:35:39.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.104 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:39.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:39.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:35:39.104 00:35:39.104 --- 10.0.0.1 ping statistics --- 00:35:39.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:39.104 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:39.104 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2419669 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2419669 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2419669 ']' 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.381 18:47:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:39.381 [2024-12-06 18:47:33.964689] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:39.381 [2024-12-06 18:47:33.965810] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:35:39.381 [2024-12-06 18:47:33.965860] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.381 [2024-12-06 18:47:34.061743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:39.381 [2024-12-06 18:47:34.099005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:39.381 [2024-12-06 18:47:34.099037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:39.381 [2024-12-06 18:47:34.099049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.381 [2024-12-06 18:47:34.099056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.381 [2024-12-06 18:47:34.099062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.381 [2024-12-06 18:47:34.100213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.381 [2024-12-06 18:47:34.100216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.381 [2024-12-06 18:47:34.156646] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:39.381 [2024-12-06 18:47:34.157186] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:39.381 [2024-12-06 18:47:34.157508] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:40.322 5000+0 records in 00:35:40.322 5000+0 records out 00:35:40.322 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0187222 s, 547 MB/s 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 AIO0 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 [2024-12-06 18:47:34.889131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.322 18:47:34 
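The startup just traced amounts to launching nvmf_tgt in interrupt mode inside the test namespace and blocking until its RPC socket answers. A minimal sketch of those two steps, reusing the flags and paths shown above; the polling loop is an assumed stand-in for the framework's waitforlisten helper, not its actual implementation:

    # Start the target on cores 0-1 in interrupt mode, inside the
    # namespace the trace created earlier (cvl_0_0_ns_spdk).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!

    # Assumed stand-in for waitforlisten: poll the default RPC socket
    # until the app responds, bailing out if the process has died.
    until "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done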
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:40.322 [2024-12-06 18:47:34.933456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2419669 0 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419669 0 idle 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419669 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:40.322 18:47:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419669 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.27 reactor_0' 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419669 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.27 reactor_0 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2419669 1 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419669 1 idle 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419669 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419674 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419674 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:40.582 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2420034 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2419669 0 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2419669 0 busy 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419669 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:40.583 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419669 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.28 reactor_0' 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419669 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.28 reactor_0 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:40.842 18:47:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:41.782 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:41.782 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:41.782 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:41.782 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419669 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.55 reactor_0' 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419669 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:02.55 reactor_0 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2419669 1 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2419669 1 busy 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419669 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:42.042 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419674 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.32 reactor_1' 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419674 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:01.32 reactor_1 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:42.304 18:47:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2420034 00:35:52.334 Initializing NVMe Controllers 00:35:52.334 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:52.334 Controller IO queue size 256, less than required. 00:35:52.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:52.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:52.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:52.334 Initialization complete. Launching workers. 
00:35:52.334 ======================================================== 00:35:52.334 Latency(us) 00:35:52.334 Device Information : IOPS MiB/s Average min max 00:35:52.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19498.27 76.17 13134.52 4595.82 31181.25 00:35:52.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20340.97 79.46 12587.20 7525.43 28423.20 00:35:52.334 ======================================================== 00:35:52.334 Total : 39839.24 155.62 12855.07 4595.82 31181.25 00:35:52.334 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2419669 0 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419669 0 idle 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419669 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419669 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.27 reactor_0' 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419669 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.27 reactor_0 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2419669 1 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419669 1 idle 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419669 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419674 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419674 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:52.334 18:47:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:52.334 18:47:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:52.334 18:47:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:52.334 18:47:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:52.334 18:47:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:52.334 18:47:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2419669 0 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419669 0 idle 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419669 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419669 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.62 reactor_0' 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419669 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.62 reactor_0 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2419669 1 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2419669 1 idle 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2419669 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
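The host-side steps around this point follow a simple connect-then-poll pattern: attach the kernel initiator to the listener created earlier, then wait until lsblk reports a block device carrying the subsystem serial. A condensed sketch, with the NQN, address, and serial taken from the trace; the retry bound mirrors the (( i++ <= 15 )) loop shown above:

    # Attach to the target listener (10.0.0.2:4420, cnode1).
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

    # waitforserial, condensed: up to 15 probes, 2 s apart, until
    # exactly one device with serial SPDKISFASTANDAWESOME appears.
    i=0
    while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
        sleep 2
    done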
00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2419669 -w 256 00:35:54.283 18:47:48 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2419674 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1' 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2419674 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.15 reactor_1 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:54.283 18:47:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:54.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:54.545 rmmod nvme_tcp 00:35:54.545 rmmod nvme_fabrics 00:35:54.545 rmmod nvme_keyring 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2419669 ']' 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2419669 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2419669 ']' 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2419669 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.545 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419669 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419669' 00:35:54.806 killing process with pid 2419669 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2419669 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2419669 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:54.806 18:47:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:57.351 18:47:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:57.351 00:35:57.351 real 0m25.035s 00:35:57.351 user 0m39.739s 00:35:57.351 sys 0m9.981s 00:35:57.351 18:47:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:57.351 18:47:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:57.351 ************************************ 00:35:57.351 END TEST nvmf_interrupt 00:35:57.351 ************************************ 00:35:57.351 00:35:57.351 real 30m5.012s 00:35:57.351 user 61m31.866s 00:35:57.351 sys 10m20.125s 00:35:57.351 18:47:51 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:57.351 18:47:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:57.351 ************************************ 00:35:57.351 END TEST nvmf_tcp 00:35:57.351 ************************************ 00:35:57.351 18:47:51 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:57.351 18:47:51 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:57.351 18:47:51 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:57.351 18:47:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:57.351 18:47:51 -- common/autotest_common.sh@10 -- # set +x 00:35:57.351 ************************************ 00:35:57.351 START TEST spdkcli_nvmf_tcp 00:35:57.351 ************************************ 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:57.351 * Looking for test storage... 00:35:57.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.351 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:57.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.352 --rc genhtml_branch_coverage=1 00:35:57.352 --rc genhtml_function_coverage=1 00:35:57.352 --rc genhtml_legend=1 00:35:57.352 --rc geninfo_all_blocks=1 00:35:57.352 --rc geninfo_unexecuted_blocks=1 00:35:57.352 00:35:57.352 ' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:57.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.352 --rc genhtml_branch_coverage=1 00:35:57.352 --rc genhtml_function_coverage=1 00:35:57.352 --rc genhtml_legend=1 00:35:57.352 --rc geninfo_all_blocks=1 00:35:57.352 --rc geninfo_unexecuted_blocks=1 00:35:57.352 00:35:57.352 ' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:57.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.352 --rc genhtml_branch_coverage=1 00:35:57.352 --rc genhtml_function_coverage=1 00:35:57.352 --rc genhtml_legend=1 00:35:57.352 --rc geninfo_all_blocks=1 00:35:57.352 --rc geninfo_unexecuted_blocks=1 00:35:57.352 00:35:57.352 ' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:57.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.352 --rc genhtml_branch_coverage=1 00:35:57.352 --rc genhtml_function_coverage=1 00:35:57.352 --rc genhtml_legend=1 00:35:57.352 --rc geninfo_all_blocks=1 00:35:57.352 --rc geninfo_unexecuted_blocks=1 00:35:57.352 00:35:57.352 ' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:57.352 
18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:57.352 18:47:51 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:57.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2423229 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2423229 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2423229 ']' 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.352 18:47:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:57.352 [2024-12-06 18:47:51.985511] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:35:57.352 [2024-12-06 18:47:51.985567] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2423229 ] 00:35:57.352 [2024-12-06 18:47:52.074686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:57.352 [2024-12-06 18:47:52.118537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.352 [2024-12-06 18:47:52.118540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:58.292 18:47:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:58.292 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:58.292 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:58.292 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:58.292 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:58.292 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:58.292 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:58.292 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:58.292 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:58.292 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:58.292 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:58.292 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:58.292 ' 00:36:00.878 [2024-12-06 18:47:55.538855] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.262 [2024-12-06 18:47:56.894990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:04.804 [2024-12-06 18:47:59.413988] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:07.346 [2024-12-06 18:48:01.636473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:08.731 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:08.731 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:08.731 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:08.731 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:08.731 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:08.731 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:08.731 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:08.731 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:08.731 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:08.731 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:08.731 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:08.731 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:08.731 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:08.731 18:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:08.731 18:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:08.731 18:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:08.731 18:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:08.731 18:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:08.731 18:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:08.731 18:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:08.731 18:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:09.304 18:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:09.304 18:48:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:09.304 18:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:09.304 18:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:09.304 18:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.304 
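For reference, the check_match step traced here reduces to three commands: dump the live /nvmf tree with spdkcli.py, compare it against a golden match file, and remove the dump. A minimal sketch, assuming the dump is redirected into the .test file that the trace later removes:

check_match() {
	# $rootdir is the spdk checkout, as set by test/common/autotest_common.sh
	local match_files=$rootdir/test/spdkcli/match_files
	# dump the live /nvmf subtree of the spdkcli shell into the .test file
	$rootdir/scripts/spdkcli.py ll /nvmf > $match_files/spdkcli_nvmf.test
	# match diffs the dump against the .test.match template and exits non-zero on mismatch
	$rootdir/test/app/match/match $match_files/spdkcli_nvmf.test.match
	rm -f $match_files/spdkcli_nvmf.test
}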
18:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:09.304 18:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:09.304 18:48:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:09.304 18:48:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:09.304 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:09.304 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:09.304 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:09.304 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:09.304 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:09.304 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:09.304 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:09.304 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:09.304 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:09.304 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:09.304 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:09.304 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:09.304 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:09.304 ' 00:36:15.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:15.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:15.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:15.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:15.889 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:15.890 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:15.890 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:15.890 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:15.890 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:15.890 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:15.890 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:15.890 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:15.890 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:15.890 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.890 
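Note the deletion order in the clear-config job above: namespaces and hosts first, then listen addresses, then the subsystems themselves, and the malloc bdevs last, once nothing references them. Written out as plain spdkcli.py one-liners (a sketch only; whether spdkcli.py takes a single quoted command argument this way is an assumption, and spdkcli_job.py additionally verifies each command's output):

spdkcli_py=$rootdir/scripts/spdkcli.py
$spdkcli_py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'
$spdkcli_py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'
$spdkcli_py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'
$spdkcli_py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'
$spdkcli_py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'
$spdkcli_py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'
$spdkcli_py '/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'
$spdkcli_py '/nvmf/subsystem delete_all'
# bdevs go last, after no subsystem namespace references them
for m in Malloc6 Malloc5 Malloc4 Malloc3 Malloc2 Malloc1; do
	$spdkcli_py "/bdevs/malloc delete $m"
done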
18:48:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2423229 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2423229 ']' 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2423229 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2423229 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2423229' 00:36:15.890 killing process with pid 2423229 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2423229 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2423229 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2423229 ']' 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2423229 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2423229 ']' 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2423229 00:36:15.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2423229) - No such process 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2423229 is not found' 00:36:15.890 Process with pid 2423229 is not found 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:15.890 00:36:15.890 real 0m18.114s 00:36:15.890 user 0m40.233s 00:36:15.890 sys 0m0.875s 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.890 18:48:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:15.890 ************************************ 00:36:15.890 END TEST spdkcli_nvmf_tcp 00:36:15.890 ************************************ 00:36:15.890 18:48:09 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:15.890 18:48:09 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:15.890 18:48:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.890 18:48:09 -- common/autotest_common.sh@10 -- # set +x 00:36:15.890 ************************************ 00:36:15.890 START TEST nvmf_identify_passthru 00:36:15.890 ************************************ 00:36:15.890 18:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:15.890 * Looking for test 
storage... 00:36:15.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:15.890 18:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:15.890 18:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:36:15.890 18:48:09 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:15.890 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:15.890 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.890 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:15.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.890 --rc genhtml_branch_coverage=1 00:36:15.890 --rc genhtml_function_coverage=1 00:36:15.890 --rc genhtml_legend=1 00:36:15.890 --rc geninfo_all_blocks=1 00:36:15.890 --rc geninfo_unexecuted_blocks=1 00:36:15.890 00:36:15.890 ' 00:36:15.890 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:15.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.890 --rc genhtml_branch_coverage=1 00:36:15.890 --rc genhtml_function_coverage=1 00:36:15.890 --rc genhtml_legend=1 00:36:15.890 --rc geninfo_all_blocks=1 00:36:15.890 --rc geninfo_unexecuted_blocks=1 00:36:15.890 00:36:15.890 ' 00:36:15.890 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:15.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.890 --rc genhtml_branch_coverage=1 00:36:15.890 --rc genhtml_function_coverage=1 00:36:15.890 --rc genhtml_legend=1 00:36:15.890 --rc geninfo_all_blocks=1 00:36:15.890 --rc geninfo_unexecuted_blocks=1 00:36:15.890 00:36:15.890 ' 00:36:15.890 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:15.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.890 --rc genhtml_branch_coverage=1 00:36:15.890 --rc genhtml_function_coverage=1 00:36:15.890 --rc genhtml_legend=1 00:36:15.890 --rc geninfo_all_blocks=1 00:36:15.890 --rc geninfo_unexecuted_blocks=1 00:36:15.890 00:36:15.890 ' 00:36:15.890 18:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.890 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.890 18:48:10 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.891 18:48:10 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.891 18:48:10 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.891 18:48:10 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.891 18:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.891 18:48:10 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.891 18:48:10 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.891 18:48:10 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.891 18:48:10 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:15.891 18:48:10 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.891 18:48:10 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.891 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:15.891 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:15.891 18:48:10 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.891 18:48:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:24.035 18:48:17 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:24.035 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:24.035 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:24.035 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:24.035 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:24.035 18:48:17 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:24.035 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:24.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:24.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:36:24.036 00:36:24.036 --- 10.0.0.2 ping statistics --- 00:36:24.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.036 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:24.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:24.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:36:24.036 00:36:24.036 --- 10.0.0.1 ping statistics --- 00:36:24.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:24.036 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:24.036 18:48:17 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:24.036 18:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.036 18:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:24.036 18:48:17 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:24.036 18:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:24.036 18:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:24.036 18:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:24.036 18:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:24.036 18:48:17 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:24.036 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:24.036 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:24.036 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:24.036 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:24.036 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:24.036 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:24.036 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:24.036 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.297 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:24.297 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:24.297 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.297 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2431207 00:36:24.297 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:24.297 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:24.297 18:48:18 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2431207 00:36:24.297 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2431207 ']' 00:36:24.297 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:24.297 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.297 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:24.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:24.297 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.297 18:48:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:24.297 [2024-12-06 18:48:18.901007] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:36:24.297 [2024-12-06 18:48:18.901078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.297 [2024-12-06 18:48:19.002174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:24.297 [2024-12-06 18:48:19.056271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.297 [2024-12-06 18:48:19.056333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
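For reference, the serial/model extraction traced above (target/identify_passthru.sh@23-24) reduces to the following sketch; the workspace path and BDF 0000:65:00.0 are the ones reported in this log and should be treated as placeholders elsewhere.

    # Sketch of the pre-target identify step: read serial and model over PCIe
    # so they can later be compared against the passthru values served over NVMe/TCP.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
    bdf=0000:65:00.0                                         # first bdf from gen_nvme.sh
    nvme_serial_number=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')
    echo "$nvme_serial_number $nvme_model_number"            # S64GNE0R605487 SAMSUNG in this run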
00:36:24.297 [2024-12-06 18:48:19.056342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.297 [2024-12-06 18:48:19.056349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.297 [2024-12-06 18:48:19.056355] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.297 [2024-12-06 18:48:19.058388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.297 [2024-12-06 18:48:19.058548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:24.297 [2024-12-06 18:48:19.058710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:24.297 [2024-12-06 18:48:19.058742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:25.239 18:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.239 INFO: Log level set to 20 00:36:25.239 INFO: Requests: 00:36:25.239 { 00:36:25.239 "jsonrpc": "2.0", 00:36:25.239 "method": "nvmf_set_config", 00:36:25.239 "id": 1, 00:36:25.239 "params": { 00:36:25.239 "admin_cmd_passthru": { 00:36:25.239 "identify_ctrlr": true 00:36:25.239 } 00:36:25.239 } 00:36:25.239 } 00:36:25.239 00:36:25.239 INFO: response: 00:36:25.239 { 00:36:25.239 "jsonrpc": "2.0", 00:36:25.239 "id": 1, 00:36:25.239 "result": true 00:36:25.239 } 00:36:25.239 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.239 18:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.239 INFO: Setting log level to 20 00:36:25.239 INFO: Setting log level to 20 00:36:25.239 INFO: Log level set to 20 00:36:25.239 INFO: Log level set to 20 00:36:25.239 INFO: Requests: 00:36:25.239 { 00:36:25.239 "jsonrpc": "2.0", 00:36:25.239 "method": "framework_start_init", 00:36:25.239 "id": 1 00:36:25.239 } 00:36:25.239 00:36:25.239 INFO: Requests: 00:36:25.239 { 00:36:25.239 "jsonrpc": "2.0", 00:36:25.239 "method": "framework_start_init", 00:36:25.239 "id": 1 00:36:25.239 } 00:36:25.239 00:36:25.239 [2024-12-06 18:48:19.826633] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:25.239 INFO: response: 00:36:25.239 { 00:36:25.239 "jsonrpc": "2.0", 00:36:25.239 "id": 1, 00:36:25.239 "result": true 00:36:25.239 } 00:36:25.239 00:36:25.239 INFO: response: 00:36:25.239 { 00:36:25.239 "jsonrpc": "2.0", 00:36:25.239 "id": 1, 00:36:25.239 "result": true 00:36:25.239 } 00:36:25.239 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.239 18:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.239 18:48:19 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:25.239 INFO: Setting log level to 40 00:36:25.239 INFO: Setting log level to 40 00:36:25.239 INFO: Setting log level to 40 00:36:25.239 [2024-12-06 18:48:19.840223] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.239 18:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.239 18:48:19 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.239 18:48:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.502 Nvme0n1 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.502 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.502 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.502 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.502 [2024-12-06 18:48:20.242789] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.502 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:25.502 [ 00:36:25.502 { 00:36:25.502 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:25.502 "subtype": "Discovery", 00:36:25.502 "listen_addresses": [], 00:36:25.502 "allow_any_host": true, 00:36:25.502 "hosts": [] 00:36:25.502 }, 00:36:25.502 { 00:36:25.502 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:25.502 "subtype": "NVMe", 00:36:25.502 "listen_addresses": [ 00:36:25.502 { 00:36:25.502 "trtype": "TCP", 00:36:25.502 "adrfam": "IPv4", 00:36:25.502 "traddr": "10.0.0.2", 00:36:25.502 "trsvcid": "4420" 00:36:25.502 } 00:36:25.502 ], 00:36:25.502 "allow_any_host": true, 00:36:25.502 "hosts": [], 00:36:25.502 "serial_number": 
"SPDK00000000000001", 00:36:25.502 "model_number": "SPDK bdev Controller", 00:36:25.502 "max_namespaces": 1, 00:36:25.502 "min_cntlid": 1, 00:36:25.502 "max_cntlid": 65519, 00:36:25.502 "namespaces": [ 00:36:25.502 { 00:36:25.502 "nsid": 1, 00:36:25.502 "bdev_name": "Nvme0n1", 00:36:25.502 "name": "Nvme0n1", 00:36:25.502 "nguid": "36344730526054870025384500000044", 00:36:25.502 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:25.502 } 00:36:25.502 ] 00:36:25.502 } 00:36:25.502 ] 00:36:25.502 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:25.502 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:25.502 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:25.502 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:25.764 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:25.764 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:25.764 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:25.764 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:26.026 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:26.026 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:26.026 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:26.026 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.026 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:26.026 18:48:20 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:26.026 rmmod nvme_tcp 00:36:26.026 rmmod nvme_fabrics 00:36:26.026 rmmod nvme_keyring 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
2431207 ']' 00:36:26.026 18:48:20 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2431207 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2431207 ']' 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2431207 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431207 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431207' 00:36:26.026 killing process with pid 2431207 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2431207 00:36:26.026 18:48:20 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2431207 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:26.599 18:48:21 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.599 18:48:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:26.599 18:48:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.515 18:48:23 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:28.515 00:36:28.515 real 0m13.277s 00:36:28.515 user 0m10.349s 00:36:28.515 sys 0m6.789s 00:36:28.515 18:48:23 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:28.515 18:48:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.515 ************************************ 00:36:28.515 END TEST nvmf_identify_passthru 00:36:28.515 ************************************ 00:36:28.515 18:48:23 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:28.515 18:48:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:28.515 18:48:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:28.515 18:48:23 -- common/autotest_common.sh@10 -- # set +x 00:36:28.515 ************************************ 00:36:28.515 START TEST nvmf_dif 00:36:28.515 ************************************ 00:36:28.515 18:48:23 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:28.777 * Looking for test storage... 
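The nvmftestfini teardown that closed nvmf_identify_passthru just above can be read as the condensed sketch below; the helper names (killprocess, iptr, _remove_spdk_ns) come from the common scripts, and the netns delete is an approximation of what _remove_spdk_ns does.

    # Approximate shape of the nvmftestfini cleanup traced above (sketch).
    kill -9 "$nvmfpid" 2>/dev/null    # killprocess; the suite also waits on the pid
    modprobe -v -r nvme-tcp           # pulls out nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    # iptr: restore iptables minus the SPDK_NVMF-tagged rule that ipts added earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1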
00:36:28.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:28.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.777 --rc genhtml_branch_coverage=1 00:36:28.777 --rc genhtml_function_coverage=1 00:36:28.777 --rc genhtml_legend=1 00:36:28.777 --rc geninfo_all_blocks=1 00:36:28.777 --rc geninfo_unexecuted_blocks=1 00:36:28.777 00:36:28.777 ' 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:28.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.777 --rc genhtml_branch_coverage=1 00:36:28.777 --rc genhtml_function_coverage=1 00:36:28.777 --rc genhtml_legend=1 00:36:28.777 --rc geninfo_all_blocks=1 00:36:28.777 --rc geninfo_unexecuted_blocks=1 00:36:28.777 00:36:28.777 ' 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:36:28.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.777 --rc genhtml_branch_coverage=1 00:36:28.777 --rc genhtml_function_coverage=1 00:36:28.777 --rc genhtml_legend=1 00:36:28.777 --rc geninfo_all_blocks=1 00:36:28.777 --rc geninfo_unexecuted_blocks=1 00:36:28.777 00:36:28.777 ' 00:36:28.777 18:48:23 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:28.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:28.777 --rc genhtml_branch_coverage=1 00:36:28.777 --rc genhtml_function_coverage=1 00:36:28.777 --rc genhtml_legend=1 00:36:28.777 --rc geninfo_all_blocks=1 00:36:28.777 --rc geninfo_unexecuted_blocks=1 00:36:28.777 00:36:28.777 ' 00:36:28.777 18:48:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:28.777 18:48:23 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:28.777 18:48:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:28.777 18:48:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.777 18:48:23 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.777 18:48:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.777 18:48:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:28.778 18:48:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:28.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:28.778 18:48:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:28.778 18:48:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:28.778 18:48:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:28.778 18:48:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:28.778 18:48:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:28.778 18:48:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:28.778 18:48:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:28.778 18:48:23 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:28.778 18:48:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:36.920 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:36.920 
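The enumeration above keys on PCI vendor:device IDs (the e810 list matches here, 8086:0x159b), then resolves each port to its kernel interface through sysfs. A minimal sketch of that lookup, which the trace prints again just below for the two discovered ports:

    # Minimal sketch of the PCI -> netdev resolution (nvmf/common.sh@410-429),
    # using the two e810 ports this log discovered.
    pci_devs=(0000:4b:00.0 0000:4b:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")              # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done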
18:48:30 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:36.920 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:36.920 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:36.920 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:36.920 18:48:30 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:36.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:36.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:36:36.921 00:36:36.921 --- 10.0.0.2 ping statistics --- 00:36:36.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.921 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:36.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:36.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:36:36.921 00:36:36.921 --- 10.0.0.1 ping statistics --- 00:36:36.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.921 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:36.921 18:48:30 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:39.468 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:39.468 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:39.468 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:39.730 18:48:34 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.730 18:48:34 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:39.730 18:48:34 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:39.730 18:48:34 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.730 18:48:34 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:39.730 18:48:34 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:39.992 18:48:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:39.992 18:48:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:39.992 18:48:34 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:39.992 18:48:34 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.992 18:48:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:39.992 18:48:34 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2437127 00:36:39.992 18:48:34 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2437127 00:36:39.992 18:48:34 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:39.992 18:48:34 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2437127 ']' 00:36:39.992 18:48:34 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.992 18:48:34 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.992 18:48:34 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:39.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.992 18:48:34 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.992 18:48:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:39.992 [2024-12-06 18:48:34.627932] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:36:39.992 [2024-12-06 18:48:34.627987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.992 [2024-12-06 18:48:34.722970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.992 [2024-12-06 18:48:34.758106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:39.992 [2024-12-06 18:48:34.758139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:39.992 [2024-12-06 18:48:34.758152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:39.992 [2024-12-06 18:48:34.758159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:39.992 [2024-12-06 18:48:34.758165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:39.992 [2024-12-06 18:48:34.758728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:40.936 18:48:35 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.936 18:48:35 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:40.936 18:48:35 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:40.936 18:48:35 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.936 [2024-12-06 18:48:35.464515] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.936 18:48:35 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.936 18:48:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.936 ************************************ 00:36:40.936 START TEST fio_dif_1_default 00:36:40.936 ************************************ 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.936 bdev_null0 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.936 [2024-12-06 18:48:35.548863] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:40.936 18:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:40.936 { 00:36:40.936 "params": { 00:36:40.936 "name": "Nvme$subsystem", 00:36:40.936 "trtype": "$TEST_TRANSPORT", 00:36:40.936 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.936 "adrfam": "ipv4", 00:36:40.936 "trsvcid": "$NVMF_PORT", 00:36:40.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.937 "hdgst": ${hdgst:-false}, 00:36:40.937 
"ddgst": ${ddgst:-false} 00:36:40.937 }, 00:36:40.937 "method": "bdev_nvme_attach_controller" 00:36:40.937 } 00:36:40.937 EOF 00:36:40.937 )") 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:40.937 "params": { 00:36:40.937 "name": "Nvme0", 00:36:40.937 "trtype": "tcp", 00:36:40.937 "traddr": "10.0.0.2", 00:36:40.937 "adrfam": "ipv4", 00:36:40.937 "trsvcid": "4420", 00:36:40.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.937 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.937 "hdgst": false, 00:36:40.937 "ddgst": false 00:36:40.937 }, 00:36:40.937 "method": "bdev_nvme_attach_controller" 00:36:40.937 }' 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:40.937 18:48:35 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:41.507 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:41.508 fio-3.35 00:36:41.508 Starting 1 thread 00:36:53.731 00:36:53.731 filename0: (groupid=0, jobs=1): err= 0: pid=2437688: Fri Dec 6 18:48:46 2024 00:36:53.731 read: IOPS=220, BW=881KiB/s (902kB/s)(8816KiB/10006msec) 00:36:53.731 slat (nsec): min=5494, max=65358, avg=6704.29, stdev=2497.09 00:36:53.731 clat (usec): min=606, max=43083, avg=18141.23, stdev=19938.07 00:36:53.731 lat (usec): min=614, max=43108, avg=18147.93, stdev=19937.74 00:36:53.731 clat percentiles (usec): 00:36:53.731 | 1.00th=[ 685], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 832], 00:36:53.731 | 30.00th=[ 848], 40.00th=[ 922], 50.00th=[ 1012], 60.00th=[41157], 00:36:53.731 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:53.731 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:36:53.731 | 99.99th=[43254] 00:36:53.731 bw ( KiB/s): min= 704, max= 3008, per=99.88%, avg=880.00, stdev=501.74, samples=20 00:36:53.731 iops : min= 176, max= 752, avg=220.00, stdev=125.43, samples=20 00:36:53.731 lat (usec) : 750=4.54%, 1000=43.06% 00:36:53.731 lat (msec) : 2=9.39%, 4=0.18%, 50=42.83% 00:36:53.731 cpu : usr=93.58%, sys=6.18%, ctx=13, majf=0, minf=255 00:36:53.731 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:53.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.731 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.731 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:53.731 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:36:53.731 00:36:53.731 Run status group 0 (all jobs): 00:36:53.731 READ: bw=881KiB/s (902kB/s), 881KiB/s-881KiB/s (902kB/s-902kB/s), io=8816KiB (9028kB), run=10006-10006msec 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.731 00:36:53.731 real 0m11.250s 00:36:53.731 user 0m22.591s 00:36:53.731 sys 0m0.998s 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 ************************************ 00:36:53.731 END TEST fio_dif_1_default 00:36:53.731 ************************************ 00:36:53.731 18:48:46 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:53.731 18:48:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:53.731 18:48:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 ************************************ 00:36:53.731 START TEST fio_dif_1_multi_subsystems 00:36:53.731 ************************************ 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 bdev_null0 00:36:53.731 18:48:46 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 [2024-12-06 18:48:46.878419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.731 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.731 bdev_null1 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.732 { 00:36:53.732 "params": { 00:36:53.732 "name": "Nvme$subsystem", 00:36:53.732 "trtype": "$TEST_TRANSPORT", 00:36:53.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.732 "adrfam": "ipv4", 00:36:53.732 "trsvcid": "$NVMF_PORT", 00:36:53.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.732 "hdgst": ${hdgst:-false}, 00:36:53.732 "ddgst": ${ddgst:-false} 00:36:53.732 }, 00:36:53.732 "method": "bdev_nvme_attach_controller" 00:36:53.732 } 00:36:53.732 EOF 00:36:53.732 )") 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.732 
18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:53.732 { 00:36:53.732 "params": { 00:36:53.732 "name": "Nvme$subsystem", 00:36:53.732 "trtype": "$TEST_TRANSPORT", 00:36:53.732 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.732 "adrfam": "ipv4", 00:36:53.732 "trsvcid": "$NVMF_PORT", 00:36:53.732 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.732 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.732 "hdgst": ${hdgst:-false}, 00:36:53.732 "ddgst": ${ddgst:-false} 00:36:53.732 }, 00:36:53.732 "method": "bdev_nvme_attach_controller" 00:36:53.732 } 00:36:53.732 EOF 00:36:53.732 )") 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:53.732 "params": { 00:36:53.732 "name": "Nvme0", 00:36:53.732 "trtype": "tcp", 00:36:53.732 "traddr": "10.0.0.2", 00:36:53.732 "adrfam": "ipv4", 00:36:53.732 "trsvcid": "4420", 00:36:53.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.732 "hdgst": false, 00:36:53.732 "ddgst": false 00:36:53.732 }, 00:36:53.732 "method": "bdev_nvme_attach_controller" 00:36:53.732 },{ 00:36:53.732 "params": { 00:36:53.732 "name": "Nvme1", 00:36:53.732 "trtype": "tcp", 00:36:53.732 "traddr": "10.0.0.2", 00:36:53.732 "adrfam": "ipv4", 00:36:53.732 "trsvcid": "4420", 00:36:53.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.732 "hdgst": false, 00:36:53.732 "ddgst": false 00:36:53.732 }, 00:36:53.732 "method": "bdev_nvme_attach_controller" 00:36:53.732 }' 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:53.732 18:48:46 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:53.732 18:48:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:36:53.732 18:48:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:53.732 18:48:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:53.732 18:48:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.732 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:53.732 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:53.732 fio-3.35 00:36:53.732 Starting 2 threads 00:37:03.740 00:37:03.740 filename0: (groupid=0, jobs=1): err= 0: pid=2440114: Fri Dec 6 18:48:58 2024 00:37:03.740 read: IOPS=97, BW=391KiB/s (401kB/s)(3920KiB/10014msec) 00:37:03.740 slat (nsec): min=5500, max=29709, avg=6576.93, stdev=1678.33 00:37:03.740 clat (usec): min=580, max=42898, avg=40854.55, stdev=2587.39 00:37:03.740 lat (usec): min=588, max=42928, avg=40861.13, stdev=2587.31 00:37:03.740 clat percentiles (usec): 00:37:03.740 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:37:03.740 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:03.740 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:03.740 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:03.740 | 99.99th=[42730] 00:37:03.740 bw ( KiB/s): min= 384, max= 416, per=37.21%, avg=390.40, stdev=13.13, samples=20 00:37:03.740 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:37:03.740 lat (usec) : 750=0.41% 00:37:03.740 lat (msec) : 50=99.59% 00:37:03.740 cpu : usr=95.21%, sys=4.56%, ctx=16, majf=0, minf=178 00:37:03.740 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.740 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.740 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:03.740 filename1: (groupid=0, jobs=1): err= 0: pid=2440115: Fri Dec 6 18:48:58 2024 00:37:03.740 read: IOPS=164, BW=657KiB/s (673kB/s)(6576KiB/10013msec) 00:37:03.740 slat (nsec): min=5497, max=35851, avg=6847.20, stdev=1719.66 00:37:03.740 clat (usec): min=592, max=41999, avg=24343.10, stdev=19849.58 00:37:03.740 lat (usec): min=597, max=42028, avg=24349.95, stdev=19849.16 00:37:03.740 clat percentiles (usec): 00:37:03.740 | 1.00th=[ 627], 5.00th=[ 775], 10.00th=[ 816], 20.00th=[ 840], 00:37:03.740 | 30.00th=[ 865], 40.00th=[ 914], 50.00th=[41157], 60.00th=[41157], 00:37:03.740 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:03.740 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:37:03.740 | 99.99th=[42206] 00:37:03.740 bw ( KiB/s): min= 384, max= 1152, per=62.59%, avg=656.00, stdev=283.47, samples=20 00:37:03.740 iops : min= 96, max= 288, avg=164.00, stdev=70.87, samples=20 00:37:03.740 lat (usec) : 750=3.59%, 1000=37.29% 00:37:03.740 lat (msec) : 2=0.73%, 50=58.39% 00:37:03.740 cpu : usr=95.13%, sys=4.64%, ctx=15, majf=0, minf=109 00:37:03.740 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:37:03.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.740 issued rwts: total=1644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.740 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:03.740 00:37:03.740 Run status group 0 (all jobs): 00:37:03.740 READ: bw=1048KiB/s (1073kB/s), 391KiB/s-657KiB/s (401kB/s-673kB/s), io=10.2MiB (10.7MB), run=10013-10014msec 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.740 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.741 00:37:03.741 real 0m11.378s 00:37:03.741 user 0m35.795s 00:37:03.741 sys 0m1.356s 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:03.741 18:48:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 ************************************ 00:37:03.741 END TEST fio_dif_1_multi_subsystems 00:37:03.741 ************************************ 00:37:03.741 18:48:58 nvmf_dif -- target/dif.sh@143 -- # 
run_test fio_dif_rand_params fio_dif_rand_params 00:37:03.741 18:48:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:03.741 18:48:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:03.741 18:48:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 ************************************ 00:37:03.741 START TEST fio_dif_rand_params 00:37:03.741 ************************************ 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 bdev_null0 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:03.741 [2024-12-06 18:48:58.340252] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
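Note: the create_subsystems trace above reduces to four RPCs against the running nvmf target. A minimal standalone sketch, assuming a target is already running (e.g. build/bin/nvmf_tgt) and a TCP transport was created earlier in the run (the nvmf_create_transport step is outside this excerpt); the bdev sizes, NQNs, and flags are copied from the trace:

    # Null bdev backing the namespace: 64 MiB, 512-byte blocks, 16-byte
    # metadata, DIF type 3 -- arguments copied from the trace above.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

    # NVMe-oF subsystem, namespace, and TCP listener, mirroring
    # create_subsystem in target/dif.sh.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The destroy_subsystems path at the end of each test reverses this with nvmf_delete_subsystem and bdev_null_delete, as seen in the traces above.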
00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:03.741 { 00:37:03.741 "params": { 00:37:03.741 "name": "Nvme$subsystem", 00:37:03.741 "trtype": "$TEST_TRANSPORT", 00:37:03.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:03.741 "adrfam": "ipv4", 00:37:03.741 "trsvcid": "$NVMF_PORT", 00:37:03.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:03.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:03.741 "hdgst": ${hdgst:-false}, 00:37:03.741 "ddgst": ${ddgst:-false} 00:37:03.741 }, 00:37:03.741 "method": "bdev_nvme_attach_controller" 00:37:03.741 } 00:37:03.741 EOF 00:37:03.741 )") 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:03.741 "params": { 00:37:03.741 "name": "Nvme0", 00:37:03.741 "trtype": "tcp", 00:37:03.741 "traddr": "10.0.0.2", 00:37:03.741 "adrfam": "ipv4", 00:37:03.741 "trsvcid": "4420", 00:37:03.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:03.741 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:03.741 "hdgst": false, 00:37:03.741 "ddgst": false 00:37:03.741 }, 00:37:03.741 "method": "bdev_nvme_attach_controller" 00:37:03.741 }' 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:03.741 18:48:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:04.001 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:04.001 ... 
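Note: the fio_bdev call above preloads SPDK's fio bdev plugin into a stock fio binary and feeds it a generated JSON config on /dev/fd/62 plus a generated job file on /dev/fd/61. A rough standalone equivalent with the config in a regular file follows; the inner "params" block is copied from the printf output in the trace, while the outer "subsystems"/"config" wrapper added by gen_nvmf_target_json is not shown verbatim in this excerpt, so treat that wrapper (and the plugin path) as assumptions:

    cat > /tmp/bdev.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    JSON

    # Job options mirror the NULL_DIF=3 case traced above (bs=128k,
    # numjobs=3, iodepth=3, runtime=5); Nvme0n1 is the bdev created for
    # namespace 1 of the attached controller Nvme0. Plugin path is
    # illustrative -- use your SPDK build's build/fio/spdk_bdev.
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
        --name=filename0 --filename=Nvme0n1 --ioengine=spdk_bdev \
        --spdk_json_conf=/tmp/bdev.json --rw=randread --bs=128k \
        --numjobs=3 --iodepth=3 --runtime=5 --time_based=1 --thread=1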
00:37:04.001 fio-3.35 00:37:04.001 Starting 3 threads 00:37:10.584 00:37:10.584 filename0: (groupid=0, jobs=1): err= 0: pid=2442313: Fri Dec 6 18:49:04 2024 00:37:10.584 read: IOPS=359, BW=45.0MiB/s (47.1MB/s)(225MiB/5004msec) 00:37:10.584 slat (nsec): min=8058, max=32703, avg=8824.04, stdev=1017.47 00:37:10.584 clat (usec): min=3655, max=91413, avg=8327.25, stdev=5292.98 00:37:10.584 lat (usec): min=3663, max=91422, avg=8336.08, stdev=5293.12 00:37:10.584 clat percentiles (usec): 00:37:10.584 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6390], 00:37:10.584 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8094], 00:37:10.584 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[10552], 00:37:10.584 | 99.00th=[46924], 99.50th=[48497], 99.90th=[88605], 99.95th=[91751], 00:37:10.584 | 99.99th=[91751] 00:37:10.584 bw ( KiB/s): min=39680, max=51456, per=43.22%, avg=45937.78, stdev=3401.31, samples=9 00:37:10.584 iops : min= 310, max= 402, avg=358.89, stdev=26.57, samples=9 00:37:10.584 lat (msec) : 4=0.11%, 10=90.83%, 20=7.83%, 50=0.89%, 100=0.33% 00:37:10.584 cpu : usr=94.18%, sys=5.58%, ctx=9, majf=0, minf=90 00:37:10.584 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.584 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.584 filename0: (groupid=0, jobs=1): err= 0: pid=2442314: Fri Dec 6 18:49:04 2024 00:37:10.584 read: IOPS=327, BW=40.9MiB/s (42.9MB/s)(206MiB/5045msec) 00:37:10.584 slat (nsec): min=5506, max=33109, avg=8263.05, stdev=1724.34 00:37:10.584 clat (usec): min=4809, max=89232, avg=9135.53, stdev=6090.30 00:37:10.584 lat (usec): min=4818, max=89240, avg=9143.79, stdev=6090.40 00:37:10.584 clat percentiles (usec): 00:37:10.584 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7242], 00:37:10.584 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8586], 00:37:10.584 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[10945], 00:37:10.584 | 99.00th=[47449], 99.50th=[48497], 99.90th=[88605], 99.95th=[89654], 00:37:10.584 | 99.99th=[89654] 00:37:10.584 bw ( KiB/s): min=29440, max=48128, per=39.69%, avg=42188.80, stdev=6276.27, samples=10 00:37:10.584 iops : min= 230, max= 376, avg=329.60, stdev=49.03, samples=10 00:37:10.584 lat (msec) : 10=84.67%, 20=13.58%, 50=1.45%, 100=0.30% 00:37:10.584 cpu : usr=94.25%, sys=5.41%, ctx=47, majf=0, minf=87 00:37:10.584 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.584 issued rwts: total=1650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.584 filename0: (groupid=0, jobs=1): err= 0: pid=2442315: Fri Dec 6 18:49:04 2024 00:37:10.584 read: IOPS=147, BW=18.4MiB/s (19.3MB/s)(92.4MiB/5010msec) 00:37:10.584 slat (nsec): min=5642, max=32914, avg=8800.62, stdev=1613.44 00:37:10.584 clat (msec): min=5, max=131, avg=20.32, stdev=22.94 00:37:10.584 lat (msec): min=5, max=131, avg=20.33, stdev=22.94 00:37:10.584 clat percentiles (msec): 00:37:10.584 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:37:10.584 | 30.00th=[ 
9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:37:10.584 | 70.00th=[ 11], 80.00th=[ 49], 90.00th=[ 51], 95.00th=[ 53], 00:37:10.584 | 99.00th=[ 92], 99.50th=[ 93], 99.90th=[ 132], 99.95th=[ 132], 00:37:10.584 | 99.99th=[ 132] 00:37:10.584 bw ( KiB/s): min=11264, max=32256, per=17.73%, avg=18841.60, stdev=7393.24, samples=10 00:37:10.584 iops : min= 88, max= 252, avg=147.20, stdev=57.76, samples=10 00:37:10.584 lat (msec) : 10=68.74%, 20=8.12%, 50=11.77%, 100=10.96%, 250=0.41% 00:37:10.584 cpu : usr=96.01%, sys=3.75%, ctx=8, majf=0, minf=105 00:37:10.584 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:10.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:10.584 issued rwts: total=739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:10.584 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:10.584 00:37:10.584 Run status group 0 (all jobs): 00:37:10.584 READ: bw=104MiB/s (109MB/s), 18.4MiB/s-45.0MiB/s (19.3MB/s-47.1MB/s), io=524MiB (549MB), run=5004-5045msec 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 2 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 bdev_null0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 [2024-12-06 18:49:04.509721] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 bdev_null1 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 
18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:10.584 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.585 bdev_null2 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:10.585 { 00:37:10.585 "params": { 00:37:10.585 "name": "Nvme$subsystem", 00:37:10.585 "trtype": "$TEST_TRANSPORT", 00:37:10.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.585 "adrfam": "ipv4", 00:37:10.585 "trsvcid": "$NVMF_PORT", 00:37:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.585 "hdgst": ${hdgst:-false}, 00:37:10.585 "ddgst": ${ddgst:-false} 00:37:10.585 }, 00:37:10.585 "method": "bdev_nvme_attach_controller" 00:37:10.585 } 00:37:10.585 EOF 00:37:10.585 )") 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:10.585 { 00:37:10.585 "params": { 00:37:10.585 "name": "Nvme$subsystem", 00:37:10.585 "trtype": "$TEST_TRANSPORT", 00:37:10.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.585 "adrfam": "ipv4", 00:37:10.585 "trsvcid": "$NVMF_PORT", 00:37:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.585 "hdgst": ${hdgst:-false}, 00:37:10.585 "ddgst": ${ddgst:-false} 00:37:10.585 }, 00:37:10.585 "method": "bdev_nvme_attach_controller" 00:37:10.585 } 00:37:10.585 EOF 00:37:10.585 )") 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:10.585 18:49:04 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:10.585 { 00:37:10.585 "params": { 00:37:10.585 "name": "Nvme$subsystem", 00:37:10.585 "trtype": "$TEST_TRANSPORT", 00:37:10.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:10.585 "adrfam": "ipv4", 00:37:10.585 "trsvcid": "$NVMF_PORT", 00:37:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:10.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:10.585 "hdgst": ${hdgst:-false}, 00:37:10.585 "ddgst": ${ddgst:-false} 00:37:10.585 }, 00:37:10.585 "method": "bdev_nvme_attach_controller" 00:37:10.585 } 00:37:10.585 EOF 00:37:10.585 )") 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:10.585 "params": { 00:37:10.585 "name": "Nvme0", 00:37:10.585 "trtype": "tcp", 00:37:10.585 "traddr": "10.0.0.2", 00:37:10.585 "adrfam": "ipv4", 00:37:10.585 "trsvcid": "4420", 00:37:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:10.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:10.585 "hdgst": false, 00:37:10.585 "ddgst": false 00:37:10.585 }, 00:37:10.585 "method": "bdev_nvme_attach_controller" 00:37:10.585 },{ 00:37:10.585 "params": { 00:37:10.585 "name": "Nvme1", 00:37:10.585 "trtype": "tcp", 00:37:10.585 "traddr": "10.0.0.2", 00:37:10.585 "adrfam": "ipv4", 00:37:10.585 "trsvcid": "4420", 00:37:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:10.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:10.585 "hdgst": false, 00:37:10.585 "ddgst": false 00:37:10.585 }, 00:37:10.585 "method": "bdev_nvme_attach_controller" 00:37:10.585 },{ 00:37:10.585 "params": { 00:37:10.585 "name": "Nvme2", 00:37:10.585 "trtype": "tcp", 00:37:10.585 "traddr": "10.0.0.2", 00:37:10.585 "adrfam": "ipv4", 00:37:10.585 "trsvcid": "4420", 00:37:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:10.585 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:10.585 "hdgst": false, 00:37:10.585 "ddgst": false 00:37:10.585 }, 00:37:10.585 "method": "bdev_nvme_attach_controller" 00:37:10.585 }' 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:10.585 18:49:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:10.585 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:10.585 ... 00:37:10.585 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:10.585 ... 00:37:10.585 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:10.585 ... 00:37:10.585 fio-3.35 00:37:10.585 Starting 24 threads 00:37:22.975 00:37:22.975 filename0: (groupid=0, jobs=1): err= 0: pid=2443817: Fri Dec 6 18:49:15 2024 00:37:22.975 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10005msec) 00:37:22.975 slat (nsec): min=5685, max=89727, avg=21918.54, stdev=13646.01 00:37:22.975 clat (usec): min=5485, max=61184, avg=23914.74, stdev=1842.63 00:37:22.975 lat (usec): min=5491, max=61201, avg=23936.66, stdev=1842.54 00:37:22.975 clat percentiles (usec): 00:37:22.975 | 1.00th=[15926], 5.00th=[23200], 10.00th=[23200], 20.00th=[23462], 00:37:22.975 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.975 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.975 | 99.00th=[25822], 99.50th=[31851], 99.90th=[43254], 99.95th=[43254], 00:37:22.975 | 99.99th=[61080] 00:37:22.975 bw ( KiB/s): min= 2436, max= 2688, per=4.11%, avg=2645.26, stdev=73.67, samples=19 00:37:22.975 iops : min= 609, max= 672, avg=661.11, stdev=18.41, samples=19 00:37:22.975 lat (msec) : 10=0.27%, 20=1.04%, 50=98.66%, 100=0.03% 00:37:22.975 cpu : usr=98.73%, sys=0.95%, ctx=48, majf=0, minf=28 00:37:22.975 IO depths : 1=5.8%, 2=12.0%, 4=24.8%, 8=50.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:22.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.975 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.975 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.975 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.975 filename0: (groupid=0, jobs=1): err= 0: pid=2443818: Fri Dec 6 18:49:15 2024 00:37:22.975 read: IOPS=683, BW=2735KiB/s (2801kB/s)(26.7MiB/10010msec) 00:37:22.975 slat (nsec): min=5675, max=94723, avg=20878.22, stdev=15694.21 00:37:22.975 clat (usec): min=2053, max=37595, avg=23227.27, stdev=3844.64 00:37:22.975 lat (usec): min=2071, max=37617, avg=23248.15, stdev=3845.51 00:37:22.975 clat percentiles (usec): 00:37:22.975 | 1.00th=[ 2868], 5.00th=[16057], 10.00th=[22938], 20.00th=[23462], 00:37:22.975 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.975 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:37:22.975 | 99.00th=[32375], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:37:22.975 | 99.99th=[37487] 00:37:22.975 bw ( KiB/s): min= 2560, max= 3888, per=4.24%, avg=2731.89, stdev=287.50, samples=19 00:37:22.975 iops : min= 640, max= 972, avg=682.84, stdev=71.90, samples=19 00:37:22.975 lat (msec) : 4=1.40%, 10=1.29%, 20=5.23%, 50=92.08% 00:37:22.975 cpu : usr=98.35%, sys=1.03%, ctx=208, majf=0, minf=39 00:37:22.975 IO depths : 1=5.2%, 2=11.0%, 4=23.3%, 8=53.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:22.975 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.975 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.975 issued rwts: total=6844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.975 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.975 filename0: (groupid=0, jobs=1): err= 0: pid=2443819: Fri Dec 6 18:49:15 2024 00:37:22.975 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec) 00:37:22.975 slat (usec): min=5, max=105, avg=20.82, stdev=15.00 00:37:22.975 clat (usec): min=11793, max=39957, avg=23868.48, stdev=1531.09 00:37:22.975 lat (usec): min=11799, max=39984, avg=23889.30, stdev=1530.67 00:37:22.975 clat percentiles (usec): 00:37:22.975 | 1.00th=[16450], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:22.975 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.975 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.975 | 99.00th=[26084], 99.50th=[30540], 99.90th=[39060], 99.95th=[39060], 00:37:22.975 | 99.99th=[40109] 00:37:22.975 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2652.42, stdev=55.70, samples=19 00:37:22.975 iops : min= 640, max= 672, avg=662.95, stdev=13.89, samples=19 00:37:22.975 lat (msec) : 20=1.83%, 50=98.17% 00:37:22.975 cpu : usr=99.03%, sys=0.69%, ctx=27, majf=0, minf=30 00:37:22.975 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:22.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.975 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.975 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.975 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.975 filename0: (groupid=0, jobs=1): err= 0: pid=2443820: Fri Dec 6 18:49:15 2024 00:37:22.975 read: IOPS=685, BW=2740KiB/s (2806kB/s)(26.8MiB/10007msec) 00:37:22.975 slat (nsec): min=5675, max=98819, avg=20731.88, stdev=15556.17 00:37:22.975 clat (usec): min=8026, max=41206, avg=23190.67, stdev=3766.11 00:37:22.975 lat (usec): min=8039, max=41214, avg=23211.40, stdev=3768.72 00:37:22.975 clat percentiles (usec): 00:37:22.975 | 1.00th=[12518], 5.00th=[15664], 10.00th=[17171], 20.00th=[22938], 00:37:22.975 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.975 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25035], 95.00th=[27395], 00:37:22.975 | 99.00th=[37487], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:37:22.975 | 99.99th=[41157] 00:37:22.975 bw ( KiB/s): min= 2560, max= 3161, per=4.23%, avg=2724.21, stdev=141.67, samples=19 00:37:22.976 iops : min= 640, max= 790, avg=680.89, stdev=35.42, samples=19 00:37:22.976 lat (msec) : 10=0.53%, 20=14.11%, 50=85.37% 00:37:22.976 cpu : usr=98.79%, sys=0.92%, ctx=59, majf=0, minf=45 00:37:22.976 IO depths : 1=4.0%, 2=8.2%, 4=18.6%, 8=60.4%, 16=8.8%, 32=0.0%, >=64=0.0% 00:37:22.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 issued rwts: total=6855,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.976 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.976 filename0: (groupid=0, jobs=1): err= 0: pid=2443821: Fri Dec 6 18:49:15 2024 00:37:22.976 read: IOPS=669, BW=2680KiB/s (2744kB/s)(26.2MiB/10016msec) 00:37:22.976 slat (nsec): min=5693, max=95164, avg=18101.02, stdev=14486.48 00:37:22.976 clat (usec): min=8355, max=41855, avg=23735.03, 
stdev=2093.40 00:37:22.976 lat (usec): min=8368, max=41864, avg=23753.13, stdev=2092.82 00:37:22.976 clat percentiles (usec): 00:37:22.976 | 1.00th=[11994], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:22.976 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.976 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.976 | 99.00th=[25560], 99.50th=[29754], 99.90th=[39584], 99.95th=[41681], 00:37:22.976 | 99.99th=[41681] 00:37:22.976 bw ( KiB/s): min= 2560, max= 2944, per=4.16%, avg=2675.50, stdev=80.71, samples=20 00:37:22.976 iops : min= 640, max= 736, avg=668.70, stdev=20.19, samples=20 00:37:22.976 lat (msec) : 10=0.55%, 20=2.67%, 50=96.78% 00:37:22.976 cpu : usr=98.87%, sys=0.83%, ctx=55, majf=0, minf=22 00:37:22.976 IO depths : 1=5.9%, 2=12.0%, 4=24.4%, 8=51.1%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:22.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 issued rwts: total=6710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.976 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.976 filename0: (groupid=0, jobs=1): err= 0: pid=2443822: Fri Dec 6 18:49:15 2024 00:37:22.976 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10005msec) 00:37:22.976 slat (nsec): min=5703, max=89085, avg=21368.71, stdev=13275.89 00:37:22.976 clat (usec): min=10522, max=40293, avg=23919.45, stdev=1386.97 00:37:22.976 lat (usec): min=10551, max=40312, avg=23940.81, stdev=1386.47 00:37:22.976 clat percentiles (usec): 00:37:22.976 | 1.00th=[21890], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:22.976 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.976 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.976 | 99.00th=[25822], 99.50th=[26084], 99.90th=[40109], 99.95th=[40109], 00:37:22.976 | 99.99th=[40109] 00:37:22.976 bw ( KiB/s): min= 2554, max= 2688, per=4.11%, avg=2645.32, stdev=59.75, samples=19 00:37:22.976 iops : min= 638, max= 672, avg=661.11, stdev=14.88, samples=19 00:37:22.976 lat (msec) : 20=0.96%, 50=99.04% 00:37:22.976 cpu : usr=98.84%, sys=0.89%, ctx=14, majf=0, minf=23 00:37:22.976 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:22.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.976 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.976 filename0: (groupid=0, jobs=1): err= 0: pid=2443823: Fri Dec 6 18:49:15 2024 00:37:22.976 read: IOPS=671, BW=2687KiB/s (2752kB/s)(26.3MiB/10005msec) 00:37:22.976 slat (nsec): min=5497, max=84090, avg=18533.49, stdev=13274.15 00:37:22.976 clat (usec): min=11193, max=40983, avg=23669.46, stdev=2520.62 00:37:22.976 lat (usec): min=11199, max=41005, avg=23687.99, stdev=2522.00 00:37:22.976 clat percentiles (usec): 00:37:22.976 | 1.00th=[14746], 5.00th=[17695], 10.00th=[22938], 20.00th=[23462], 00:37:22.976 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.976 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:37:22.976 | 99.00th=[34341], 99.50th=[36439], 99.90th=[39060], 99.95th=[39584], 00:37:22.976 | 99.99th=[41157] 00:37:22.976 bw ( KiB/s): min= 2554, max= 2992, per=4.17%, avg=2686.32, stdev=97.86, samples=19 
00:37:22.976 iops : min= 638, max= 748, avg=671.37, stdev=24.49, samples=19 00:37:22.976 lat (msec) : 20=6.75%, 50=93.25% 00:37:22.976 cpu : usr=98.84%, sys=0.89%, ctx=14, majf=0, minf=22 00:37:22.976 IO depths : 1=4.1%, 2=9.7%, 4=23.4%, 8=54.5%, 16=8.4%, 32=0.0%, >=64=0.0% 00:37:22.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 issued rwts: total=6722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.976 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.976 filename0: (groupid=0, jobs=1): err= 0: pid=2443824: Fri Dec 6 18:49:15 2024 00:37:22.976 read: IOPS=691, BW=2765KiB/s (2832kB/s)(27.1MiB/10021msec) 00:37:22.976 slat (nsec): min=5679, max=88628, avg=8928.20, stdev=4866.51 00:37:22.976 clat (usec): min=2097, max=36020, avg=23052.69, stdev=3817.66 00:37:22.976 lat (usec): min=2113, max=36028, avg=23061.62, stdev=3816.69 00:37:22.976 clat percentiles (usec): 00:37:22.976 | 1.00th=[ 2900], 5.00th=[15139], 10.00th=[18220], 20.00th=[23462], 00:37:22.976 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.976 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:37:22.976 | 99.00th=[32113], 99.50th=[34341], 99.90th=[35390], 99.95th=[35914], 00:37:22.976 | 99.99th=[35914] 00:37:22.976 bw ( KiB/s): min= 2560, max= 3704, per=4.29%, avg=2763.20, stdev=238.11, samples=20 00:37:22.976 iops : min= 640, max= 926, avg=690.70, stdev=59.54, samples=20 00:37:22.976 lat (msec) : 4=1.15%, 10=0.92%, 20=9.02%, 50=88.90% 00:37:22.976 cpu : usr=98.75%, sys=0.97%, ctx=22, majf=0, minf=50 00:37:22.976 IO depths : 1=4.8%, 2=11.0%, 4=24.8%, 8=51.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:22.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 issued rwts: total=6928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.976 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.976 filename1: (groupid=0, jobs=1): err= 0: pid=2443825: Fri Dec 6 18:49:15 2024 00:37:22.976 read: IOPS=681, BW=2728KiB/s (2793kB/s)(26.7MiB/10006msec) 00:37:22.976 slat (usec): min=5, max=107, avg=20.74, stdev=16.08 00:37:22.976 clat (usec): min=7859, max=51430, avg=23297.68, stdev=4329.60 00:37:22.976 lat (usec): min=7871, max=51445, avg=23318.42, stdev=4332.53 00:37:22.976 clat percentiles (usec): 00:37:22.976 | 1.00th=[13304], 5.00th=[15401], 10.00th=[17171], 20.00th=[22152], 00:37:22.976 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.976 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25297], 95.00th=[31327], 00:37:22.976 | 99.00th=[38536], 99.50th=[39584], 99.90th=[41157], 99.95th=[41681], 00:37:22.976 | 99.99th=[51643] 00:37:22.976 bw ( KiB/s): min= 2560, max= 3113, per=4.24%, avg=2727.53, stdev=142.20, samples=19 00:37:22.976 iops : min= 640, max= 778, avg=681.68, stdev=35.55, samples=19 00:37:22.976 lat (msec) : 10=0.38%, 20=16.10%, 50=83.48%, 100=0.03% 00:37:22.976 cpu : usr=98.30%, sys=1.22%, ctx=152, majf=0, minf=24 00:37:22.976 IO depths : 1=2.2%, 2=5.7%, 4=17.2%, 8=63.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:37:22.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 complete : 0=0.0%, 4=91.8%, 8=3.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 issued rwts: total=6824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.976 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:37:22.976 filename1: (groupid=0, jobs=1): err= 0: pid=2443826: Fri Dec 6 18:49:15 2024 00:37:22.976 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.0MiB/10019msec) 00:37:22.976 slat (nsec): min=5698, max=66121, avg=14939.18, stdev=9380.26 00:37:22.976 clat (usec): min=12520, max=35899, avg=23961.17, stdev=1520.09 00:37:22.976 lat (usec): min=12526, max=35912, avg=23976.10, stdev=1520.78 00:37:22.976 clat percentiles (usec): 00:37:22.976 | 1.00th=[16712], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:37:22.976 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.976 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:37:22.976 | 99.00th=[28967], 99.50th=[31065], 99.90th=[33817], 99.95th=[35390], 00:37:22.976 | 99.99th=[35914] 00:37:22.976 bw ( KiB/s): min= 2560, max= 2733, per=4.13%, avg=2656.15, stdev=55.38, samples=20 00:37:22.976 iops : min= 640, max= 683, avg=663.85, stdev=13.80, samples=20 00:37:22.976 lat (msec) : 20=2.03%, 50=97.97% 00:37:22.976 cpu : usr=98.74%, sys=0.99%, ctx=16, majf=0, minf=34 00:37:22.976 IO depths : 1=4.7%, 2=10.9%, 4=24.9%, 8=51.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:37:22.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.976 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.976 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.976 filename1: (groupid=0, jobs=1): err= 0: pid=2443827: Fri Dec 6 18:49:15 2024 00:37:22.976 read: IOPS=696, BW=2785KiB/s (2852kB/s)(27.2MiB/10002msec) 00:37:22.976 slat (nsec): min=5665, max=96737, avg=15424.55, stdev=12754.25 00:37:22.976 clat (usec): min=7596, max=41567, avg=22880.84, stdev=4234.77 00:37:22.976 lat (usec): min=7609, max=41590, avg=22896.27, stdev=4237.22 00:37:22.976 clat percentiles (usec): 00:37:22.976 | 1.00th=[11731], 5.00th=[15139], 10.00th=[16909], 20.00th=[19792], 00:37:22.976 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:37:22.976 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25297], 95.00th=[28705], 00:37:22.976 | 99.00th=[38011], 99.50th=[39584], 99.90th=[40633], 99.95th=[41157], 00:37:22.976 | 99.99th=[41681] 00:37:22.976 bw ( KiB/s): min= 2560, max= 2960, per=4.33%, avg=2786.00, stdev=103.41, samples=19 00:37:22.976 iops : min= 640, max= 740, avg=696.32, stdev=25.93, samples=19 00:37:22.976 lat (msec) : 10=0.42%, 20=20.12%, 50=79.47% 00:37:22.976 cpu : usr=98.78%, sys=0.93%, ctx=25, majf=0, minf=37 00:37:22.977 IO depths : 1=1.2%, 2=2.6%, 4=11.8%, 8=72.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:37:22.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 complete : 0=0.0%, 4=90.8%, 8=4.5%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 issued rwts: total=6964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.977 filename1: (groupid=0, jobs=1): err= 0: pid=2443828: Fri Dec 6 18:49:15 2024 00:37:22.977 read: IOPS=664, BW=2659KiB/s (2722kB/s)(26.0MiB/10008msec) 00:37:22.977 slat (usec): min=5, max=101, avg=22.60, stdev=15.71 00:37:22.977 clat (usec): min=10358, max=40225, avg=23858.06, stdev=2055.24 00:37:22.977 lat (usec): min=10390, max=40244, avg=23880.65, stdev=2055.68 00:37:22.977 clat percentiles (usec): 00:37:22.977 | 1.00th=[14615], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:37:22.977 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 
60.00th=[23987], 00:37:22.977 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:37:22.977 | 99.00th=[32375], 99.50th=[34866], 99.90th=[39584], 99.95th=[40109], 00:37:22.977 | 99.99th=[40109] 00:37:22.977 bw ( KiB/s): min= 2554, max= 2816, per=4.12%, avg=2650.68, stdev=70.63, samples=19 00:37:22.977 iops : min= 638, max= 704, avg=662.47, stdev=17.64, samples=19 00:37:22.977 lat (msec) : 20=3.01%, 50=96.99% 00:37:22.977 cpu : usr=98.89%, sys=0.82%, ctx=29, majf=0, minf=39 00:37:22.977 IO depths : 1=5.2%, 2=11.2%, 4=24.3%, 8=52.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:22.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 issued rwts: total=6652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.977 filename1: (groupid=0, jobs=1): err= 0: pid=2443829: Fri Dec 6 18:49:15 2024 00:37:22.977 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10005msec) 00:37:22.977 slat (usec): min=5, max=102, avg=24.95, stdev=16.22 00:37:22.977 clat (usec): min=9608, max=40933, avg=23873.68, stdev=1554.06 00:37:22.977 lat (usec): min=9614, max=40939, avg=23898.63, stdev=1553.88 00:37:22.977 clat percentiles (usec): 00:37:22.977 | 1.00th=[20055], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:22.977 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.977 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.977 | 99.00th=[25560], 99.50th=[30540], 99.90th=[40109], 99.95th=[40109], 00:37:22.977 | 99.99th=[41157] 00:37:22.977 bw ( KiB/s): min= 2554, max= 2688, per=4.11%, avg=2645.32, stdev=58.70, samples=19 00:37:22.977 iops : min= 638, max= 672, avg=661.11, stdev=14.67, samples=19 00:37:22.977 lat (msec) : 10=0.21%, 20=0.78%, 50=99.01% 00:37:22.977 cpu : usr=98.34%, sys=1.06%, ctx=353, majf=0, minf=24 00:37:22.977 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:22.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 issued rwts: total=6638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.977 filename1: (groupid=0, jobs=1): err= 0: pid=2443830: Fri Dec 6 18:49:15 2024 00:37:22.977 read: IOPS=684, BW=2738KiB/s (2804kB/s)(26.8MiB/10009msec) 00:37:22.977 slat (usec): min=5, max=108, avg=19.93, stdev=16.46 00:37:22.977 clat (usec): min=10768, max=46527, avg=23218.58, stdev=4409.26 00:37:22.977 lat (usec): min=10783, max=46546, avg=23238.51, stdev=4412.10 00:37:22.977 clat percentiles (usec): 00:37:22.977 | 1.00th=[13042], 5.00th=[15533], 10.00th=[16712], 20.00th=[20317], 00:37:22.977 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:37:22.977 | 70.00th=[24249], 80.00th=[24511], 90.00th=[26084], 95.00th=[31589], 00:37:22.977 | 99.00th=[38536], 99.50th=[40109], 99.90th=[44303], 99.95th=[46400], 00:37:22.977 | 99.99th=[46400] 00:37:22.977 bw ( KiB/s): min= 2432, max= 3049, per=4.24%, avg=2728.68, stdev=147.88, samples=19 00:37:22.977 iops : min= 608, max= 762, avg=682.00, stdev=36.95, samples=19 00:37:22.977 lat (msec) : 20=18.58%, 50=81.42% 00:37:22.977 cpu : usr=98.84%, sys=0.84%, ctx=69, majf=0, minf=20 00:37:22.977 IO depths : 1=2.6%, 2=5.5%, 4=14.3%, 8=66.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:37:22.977 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 complete : 0=0.0%, 4=91.4%, 8=3.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 issued rwts: total=6852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.977 filename1: (groupid=0, jobs=1): err= 0: pid=2443831: Fri Dec 6 18:49:15 2024 00:37:22.977 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10014msec) 00:37:22.977 slat (nsec): min=5668, max=97042, avg=18934.24, stdev=14504.74 00:37:22.977 clat (usec): min=11230, max=40451, avg=23820.54, stdev=1801.89 00:37:22.977 lat (usec): min=11236, max=40482, avg=23839.47, stdev=1802.12 00:37:22.977 clat percentiles (usec): 00:37:22.977 | 1.00th=[15270], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:37:22.977 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.977 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.977 | 99.00th=[25560], 99.50th=[34866], 99.90th=[39584], 99.95th=[40633], 00:37:22.977 | 99.99th=[40633] 00:37:22.977 bw ( KiB/s): min= 2554, max= 2816, per=4.13%, avg=2657.47, stdev=66.74, samples=19 00:37:22.977 iops : min= 638, max= 704, avg=664.21, stdev=16.69, samples=19 00:37:22.977 lat (msec) : 20=2.99%, 50=97.01% 00:37:22.977 cpu : usr=99.00%, sys=0.70%, ctx=39, majf=0, minf=23 00:37:22.977 IO depths : 1=5.3%, 2=11.3%, 4=24.2%, 8=52.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:22.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 issued rwts: total=6678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.977 filename1: (groupid=0, jobs=1): err= 0: pid=2443832: Fri Dec 6 18:49:15 2024 00:37:22.977 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10029msec) 00:37:22.977 slat (nsec): min=5687, max=85235, avg=14036.37, stdev=10570.39 00:37:22.977 clat (usec): min=7314, max=32700, avg=23859.46, stdev=1781.76 00:37:22.977 lat (usec): min=7326, max=32713, avg=23873.50, stdev=1781.12 00:37:22.977 clat percentiles (usec): 00:37:22.977 | 1.00th=[13829], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:37:22.977 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.977 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.977 | 99.00th=[26084], 99.50th=[29492], 99.90th=[32113], 99.95th=[32637], 00:37:22.977 | 99.99th=[32637] 00:37:22.977 bw ( KiB/s): min= 2554, max= 2944, per=4.14%, avg=2667.55, stdev=86.09, samples=20 00:37:22.977 iops : min= 638, max= 736, avg=666.75, stdev=21.54, samples=20 00:37:22.977 lat (msec) : 10=0.48%, 20=1.50%, 50=98.03% 00:37:22.977 cpu : usr=98.87%, sys=0.81%, ctx=58, majf=0, minf=25 00:37:22.977 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:22.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.977 filename2: (groupid=0, jobs=1): err= 0: pid=2443833: Fri Dec 6 18:49:15 2024 00:37:22.977 read: IOPS=681, BW=2727KiB/s (2792kB/s)(26.7MiB/10016msec) 00:37:22.977 slat (nsec): min=5681, max=78732, avg=12705.10, stdev=10250.70 00:37:22.977 clat (usec): min=7306, max=41210, avg=23367.78, 
stdev=3358.79 00:37:22.977 lat (usec): min=7344, max=41219, avg=23380.49, stdev=3359.16 00:37:22.977 clat percentiles (usec): 00:37:22.977 | 1.00th=[10159], 5.00th=[16188], 10.00th=[19792], 20.00th=[23462], 00:37:22.977 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.977 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560], 00:37:22.977 | 99.00th=[33162], 99.50th=[35390], 99.90th=[41157], 99.95th=[41157], 00:37:22.977 | 99.99th=[41157] 00:37:22.977 bw ( KiB/s): min= 2554, max= 3200, per=4.23%, avg=2723.50, stdev=163.12, samples=20 00:37:22.977 iops : min= 638, max= 800, avg=680.70, stdev=40.78, samples=20 00:37:22.977 lat (msec) : 10=0.95%, 20=9.36%, 50=89.69% 00:37:22.977 cpu : usr=98.26%, sys=1.19%, ctx=123, majf=0, minf=59 00:37:22.977 IO depths : 1=4.0%, 2=9.4%, 4=22.7%, 8=55.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:37:22.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 issued rwts: total=6828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.977 filename2: (groupid=0, jobs=1): err= 0: pid=2443834: Fri Dec 6 18:49:15 2024 00:37:22.977 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10005msec) 00:37:22.977 slat (nsec): min=5359, max=96360, avg=21929.06, stdev=13934.72 00:37:22.977 clat (usec): min=9727, max=39968, avg=23902.11, stdev=1350.40 00:37:22.977 lat (usec): min=9733, max=39983, avg=23924.03, stdev=1350.03 00:37:22.977 clat percentiles (usec): 00:37:22.977 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:22.977 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.977 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.977 | 99.00th=[25822], 99.50th=[26084], 99.90th=[40109], 99.95th=[40109], 00:37:22.977 | 99.99th=[40109] 00:37:22.977 bw ( KiB/s): min= 2554, max= 2688, per=4.11%, avg=2645.32, stdev=59.75, samples=19 00:37:22.977 iops : min= 638, max= 672, avg=661.11, stdev=14.88, samples=19 00:37:22.977 lat (msec) : 10=0.03%, 20=0.81%, 50=99.16% 00:37:22.977 cpu : usr=98.86%, sys=0.84%, ctx=30, majf=0, minf=30 00:37:22.977 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:37:22.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.977 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.977 filename2: (groupid=0, jobs=1): err= 0: pid=2443835: Fri Dec 6 18:49:15 2024 00:37:22.977 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.2MiB/10004msec) 00:37:22.977 slat (usec): min=5, max=112, avg=17.93, stdev=14.50 00:37:22.977 clat (usec): min=8463, max=42037, avg=23703.91, stdev=4266.22 00:37:22.977 lat (usec): min=8476, max=42054, avg=23721.84, stdev=4267.04 00:37:22.977 clat percentiles (usec): 00:37:22.978 | 1.00th=[11731], 5.00th=[16188], 10.00th=[18482], 20.00th=[22938], 00:37:22.978 | 30.00th=[23462], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.978 | 70.00th=[24249], 80.00th=[24773], 90.00th=[27919], 95.00th=[31851], 00:37:22.978 | 99.00th=[38536], 99.50th=[40109], 99.90th=[42206], 99.95th=[42206], 00:37:22.978 | 99.99th=[42206] 00:37:22.978 bw ( KiB/s): min= 2480, max= 2880, per=4.16%, avg=2674.37, stdev=113.11, samples=19 00:37:22.978 
iops : min= 620, max= 720, avg=668.37, stdev=28.28, samples=19 00:37:22.978 lat (msec) : 10=0.24%, 20=14.36%, 50=85.41% 00:37:22.978 cpu : usr=98.44%, sys=1.09%, ctx=128, majf=0, minf=46 00:37:22.978 IO depths : 1=1.4%, 2=4.8%, 4=15.7%, 8=66.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:37:22.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 complete : 0=0.0%, 4=92.0%, 8=3.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 issued rwts: total=6715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.978 filename2: (groupid=0, jobs=1): err= 0: pid=2443836: Fri Dec 6 18:49:15 2024 00:37:22.978 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.0MiB/10007msec) 00:37:22.978 slat (usec): min=5, max=100, avg=22.17, stdev=15.04 00:37:22.978 clat (usec): min=10202, max=36416, avg=23818.73, stdev=1818.87 00:37:22.978 lat (usec): min=10210, max=36425, avg=23840.90, stdev=1818.86 00:37:22.978 clat percentiles (usec): 00:37:22.978 | 1.00th=[15664], 5.00th=[22676], 10.00th=[23200], 20.00th=[23462], 00:37:22.978 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.978 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.978 | 99.00th=[28705], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 00:37:22.978 | 99.99th=[36439] 00:37:22.978 bw ( KiB/s): min= 2560, max= 2816, per=4.12%, avg=2652.37, stdev=70.54, samples=19 00:37:22.978 iops : min= 640, max= 704, avg=662.89, stdev=17.57, samples=19 00:37:22.978 lat (msec) : 20=2.71%, 50=97.29% 00:37:22.978 cpu : usr=98.18%, sys=1.19%, ctx=224, majf=0, minf=30 00:37:22.978 IO depths : 1=5.9%, 2=11.9%, 4=23.9%, 8=51.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:37:22.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 issued rwts: total=6668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.978 filename2: (groupid=0, jobs=1): err= 0: pid=2443837: Fri Dec 6 18:49:15 2024 00:37:22.978 read: IOPS=667, BW=2670KiB/s (2734kB/s)(26.1MiB/10020msec) 00:37:22.978 slat (nsec): min=5689, max=99989, avg=12641.81, stdev=9323.38 00:37:22.978 clat (usec): min=7389, max=27257, avg=23863.21, stdev=1560.67 00:37:22.978 lat (usec): min=7398, max=27273, avg=23875.85, stdev=1560.11 00:37:22.978 clat percentiles (usec): 00:37:22.978 | 1.00th=[14091], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:37:22.978 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.978 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.978 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26346], 99.95th=[27132], 00:37:22.978 | 99.99th=[27132] 00:37:22.978 bw ( KiB/s): min= 2554, max= 2944, per=4.14%, avg=2667.55, stdev=86.09, samples=20 00:37:22.978 iops : min= 638, max= 736, avg=666.75, stdev=21.54, samples=20 00:37:22.978 lat (msec) : 10=0.48%, 20=0.99%, 50=98.53% 00:37:22.978 cpu : usr=99.00%, sys=0.74%, ctx=12, majf=0, minf=40 00:37:22.978 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:22.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.978 
filename2: (groupid=0, jobs=1): err= 0: pid=2443838: Fri Dec 6 18:49:15 2024 00:37:22.978 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10005msec) 00:37:22.978 slat (usec): min=5, max=110, avg=18.04, stdev=14.77 00:37:22.978 clat (usec): min=7370, max=40151, avg=23982.59, stdev=2875.34 00:37:22.978 lat (usec): min=7377, max=40168, avg=24000.62, stdev=2876.26 00:37:22.978 clat percentiles (usec): 00:37:22.978 | 1.00th=[14746], 5.00th=[19530], 10.00th=[23200], 20.00th=[23462], 00:37:22.978 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:37:22.978 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[26870], 00:37:22.978 | 99.00th=[35390], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:37:22.978 | 99.99th=[40109] 00:37:22.978 bw ( KiB/s): min= 2546, max= 2746, per=4.11%, avg=2642.79, stdev=48.70, samples=19 00:37:22.978 iops : min= 636, max= 686, avg=660.47, stdev=12.14, samples=19 00:37:22.978 lat (msec) : 10=0.09%, 20=5.56%, 50=94.35% 00:37:22.978 cpu : usr=98.86%, sys=0.85%, ctx=15, majf=0, minf=38 00:37:22.978 IO depths : 1=0.9%, 2=3.5%, 4=12.5%, 8=69.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:37:22.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 complete : 0=0.0%, 4=91.7%, 8=4.8%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.978 filename2: (groupid=0, jobs=1): err= 0: pid=2443839: Fri Dec 6 18:49:15 2024 00:37:22.978 read: IOPS=656, BW=2626KiB/s (2689kB/s)(25.7MiB/10004msec) 00:37:22.978 slat (nsec): min=5658, max=84074, avg=15590.60, stdev=12085.20 00:37:22.978 clat (usec): min=9056, max=54328, avg=24300.20, stdev=3400.45 00:37:22.978 lat (usec): min=9071, max=54347, avg=24315.79, stdev=3400.59 00:37:22.978 clat percentiles (usec): 00:37:22.978 | 1.00th=[15664], 5.00th=[18220], 10.00th=[20841], 20.00th=[23462], 00:37:22.978 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:22.978 | 70.00th=[24511], 80.00th=[25035], 90.00th=[27657], 95.00th=[30802], 00:37:22.978 | 99.00th=[34866], 99.50th=[38011], 99.90th=[42730], 99.95th=[54264], 00:37:22.978 | 99.99th=[54264] 00:37:22.978 bw ( KiB/s): min= 2448, max= 2752, per=4.07%, avg=2619.26, stdev=70.57, samples=19 00:37:22.978 iops : min= 612, max= 688, avg=654.58, stdev=17.71, samples=19 00:37:22.978 lat (msec) : 10=0.11%, 20=8.04%, 50=91.78%, 100=0.08% 00:37:22.978 cpu : usr=97.02%, sys=1.86%, ctx=319, majf=0, minf=24 00:37:22.978 IO depths : 1=0.2%, 2=0.3%, 4=2.7%, 8=80.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:37:22.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 complete : 0=0.0%, 4=89.1%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 issued rwts: total=6568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.978 filename2: (groupid=0, jobs=1): err= 0: pid=2443840: Fri Dec 6 18:49:15 2024 00:37:22.978 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10002msec) 00:37:22.978 slat (usec): min=5, max=101, avg=23.14, stdev=15.54 00:37:22.978 clat (usec): min=12571, max=32759, avg=23883.05, stdev=873.16 00:37:22.978 lat (usec): min=12587, max=32765, avg=23906.19, stdev=872.52 00:37:22.978 clat percentiles (usec): 00:37:22.978 | 1.00th=[22414], 5.00th=[22938], 10.00th=[23200], 20.00th=[23462], 00:37:22.978 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:37:22.978 | 
70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:22.978 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[30802], 00:37:22.978 | 99.99th=[32637] 00:37:22.978 bw ( KiB/s): min= 2554, max= 2688, per=4.12%, avg=2652.42, stdev=58.29, samples=19 00:37:22.978 iops : min= 638, max= 672, avg=662.95, stdev=14.61, samples=19 00:37:22.978 lat (msec) : 20=0.57%, 50=99.43% 00:37:22.978 cpu : usr=98.38%, sys=1.06%, ctx=139, majf=0, minf=23 00:37:22.978 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:22.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:22.978 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:22.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:22.978 00:37:22.978 Run status group 0 (all jobs): 00:37:22.978 READ: bw=62.8MiB/s (65.9MB/s), 2626KiB/s-2785KiB/s (2689kB/s-2852kB/s), io=630MiB (661MB), run=10002-10029msec 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.978 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@45 -- # for sub in "$@" 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 bdev_null0 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 [2024-12-06 18:49:16.225390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 bdev_null1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.979 18:49:16 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:22.979 { 00:37:22.979 "params": { 00:37:22.979 "name": "Nvme$subsystem", 00:37:22.979 "trtype": "$TEST_TRANSPORT", 00:37:22.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.979 "adrfam": "ipv4", 00:37:22.979 "trsvcid": "$NVMF_PORT", 00:37:22.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.979 "hdgst": ${hdgst:-false}, 00:37:22.979 "ddgst": ${ddgst:-false} 00:37:22.979 }, 00:37:22.979 "method": "bdev_nvme_attach_controller" 00:37:22.979 } 00:37:22.979 EOF 00:37:22.979 )") 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:22.979 { 00:37:22.979 "params": { 00:37:22.979 "name": "Nvme$subsystem", 00:37:22.979 "trtype": "$TEST_TRANSPORT", 00:37:22.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:22.979 "adrfam": "ipv4", 00:37:22.979 "trsvcid": "$NVMF_PORT", 00:37:22.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:22.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:22.979 "hdgst": ${hdgst:-false}, 00:37:22.979 "ddgst": ${ddgst:-false} 00:37:22.979 }, 00:37:22.979 "method": "bdev_nvme_attach_controller" 00:37:22.979 } 00:37:22.979 EOF 00:37:22.979 )") 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:22.979 18:49:16 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:22.979 18:49:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:22.979 "params": { 00:37:22.979 "name": "Nvme0", 00:37:22.979 "trtype": "tcp", 00:37:22.979 "traddr": "10.0.0.2", 00:37:22.979 "adrfam": "ipv4", 00:37:22.980 "trsvcid": "4420", 00:37:22.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:22.980 "hdgst": false, 00:37:22.980 "ddgst": false 00:37:22.980 }, 00:37:22.980 "method": "bdev_nvme_attach_controller" 00:37:22.980 },{ 00:37:22.980 "params": { 00:37:22.980 "name": "Nvme1", 00:37:22.980 "trtype": "tcp", 00:37:22.980 "traddr": "10.0.0.2", 00:37:22.980 "adrfam": "ipv4", 00:37:22.980 "trsvcid": "4420", 00:37:22.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:22.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:22.980 "hdgst": false, 00:37:22.980 "ddgst": false 00:37:22.980 }, 00:37:22.980 "method": "bdev_nvme_attach_controller" 00:37:22.980 }' 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:22.980 18:49:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:22.980 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:22.980 ... 00:37:22.980 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:22.980 ... 
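The run that starts below is launched through SPDK's fio bdev plugin: the JSON rendered above is handed to fio on /dev/fd/62 while LD_PRELOAD injects the spdk_bdev engine. A minimal hand-run sketch of the same invocation, assuming an SPDK build tree at $SPDK_DIR and a target already listening on 10.0.0.2:4420 (the /tmp path and the Nvme0n1 bdev name are illustrative, and the "subsystems" wrapper is the standard SPDK JSON config layout rather than anything shown verbatim in this trace):

    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    # Preload the plugin exactly as in the trace above; the job shape mirrors
    # the randread run below (bs=8k,16k,128k, iodepth=8, 5s runtime). The
    # spdk_bdev engine requires --thread=1.
    LD_PRELOAD=$SPDK_DIR/build/fio/spdk_bdev fio --name=filename0 \
        --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json \
        --filename=Nvme0n1 --rw=randread --bs=8k,16k,128k \
        --numjobs=2 --iodepth=8 --runtime=5 --time_based --thread=1
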
00:37:22.980 fio-3.35 00:37:22.980 Starting 4 threads 00:37:28.269 00:37:28.269 filename0: (groupid=0, jobs=1): err= 0: pid=2446027: Fri Dec 6 18:49:22 2024 00:37:28.269 read: IOPS=2969, BW=23.2MiB/s (24.3MB/s)(116MiB/5002msec) 00:37:28.269 slat (nsec): min=5497, max=35473, avg=7868.16, stdev=1274.17 00:37:28.269 clat (usec): min=1147, max=44341, avg=2674.02, stdev=1409.74 00:37:28.269 lat (usec): min=1153, max=44351, avg=2681.88, stdev=1409.89 00:37:28.269 clat percentiles (usec): 00:37:28.269 | 1.00th=[ 1860], 5.00th=[ 2024], 10.00th=[ 2180], 20.00th=[ 2311], 00:37:28.269 | 30.00th=[ 2442], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2704], 00:37:28.269 | 70.00th=[ 2704], 80.00th=[ 2900], 90.00th=[ 3032], 95.00th=[ 3425], 00:37:28.269 | 99.00th=[ 3687], 99.50th=[ 3785], 99.90th=[43779], 99.95th=[44303], 00:37:28.269 | 99.99th=[44303] 00:37:28.269 bw ( KiB/s): min=20032, max=24352, per=25.26%, avg=23722.67, stdev=1388.96, samples=9 00:37:28.269 iops : min= 2504, max= 3044, avg=2965.33, stdev=173.62, samples=9 00:37:28.269 lat (msec) : 2=2.52%, 4=97.29%, 10=0.07%, 50=0.11% 00:37:28.269 cpu : usr=96.80%, sys=2.94%, ctx=8, majf=0, minf=81 00:37:28.269 IO depths : 1=0.1%, 2=0.4%, 4=69.8%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.269 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.269 issued rwts: total=14853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.269 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:28.269 filename0: (groupid=0, jobs=1): err= 0: pid=2446028: Fri Dec 6 18:49:22 2024 00:37:28.269 read: IOPS=2879, BW=22.5MiB/s (23.6MB/s)(113MiB/5002msec) 00:37:28.269 slat (nsec): min=5502, max=65609, avg=6766.04, stdev=1945.46 00:37:28.269 clat (usec): min=1371, max=4684, avg=2759.56, stdev=249.61 00:37:28.269 lat (usec): min=1384, max=4693, avg=2766.32, stdev=249.48 00:37:28.269 clat percentiles (usec): 00:37:28.269 | 1.00th=[ 2040], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:37:28.269 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:37:28.269 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 3097], 00:37:28.269 | 99.00th=[ 3720], 99.50th=[ 4047], 99.90th=[ 4359], 99.95th=[ 4359], 00:37:28.269 | 99.99th=[ 4686] 00:37:28.269 bw ( KiB/s): min=22816, max=23454, per=24.53%, avg=23038.00, stdev=179.10, samples=9 00:37:28.269 iops : min= 2852, max= 2931, avg=2879.67, stdev=22.17, samples=9 00:37:28.269 lat (msec) : 2=0.65%, 4=98.77%, 10=0.58% 00:37:28.269 cpu : usr=96.52%, sys=3.22%, ctx=6, majf=0, minf=90 00:37:28.269 IO depths : 1=0.1%, 2=0.1%, 4=73.7%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.269 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.269 issued rwts: total=14401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.269 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:28.269 filename1: (groupid=0, jobs=1): err= 0: pid=2446030: Fri Dec 6 18:49:22 2024 00:37:28.269 read: IOPS=2884, BW=22.5MiB/s (23.6MB/s)(113MiB/5002msec) 00:37:28.269 slat (nsec): min=5493, max=42191, avg=6087.42, stdev=1606.06 00:37:28.269 clat (usec): min=1240, max=5614, avg=2756.66, stdev=265.84 00:37:28.269 lat (usec): min=1245, max=5620, avg=2762.75, stdev=265.71 00:37:28.269 clat percentiles (usec): 00:37:28.269 | 1.00th=[ 2008], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2638], 00:37:28.269 | 30.00th=[ 
2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:37:28.269 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2966], 95.00th=[ 3097], 00:37:28.269 | 99.00th=[ 3785], 99.50th=[ 4080], 99.90th=[ 4490], 99.95th=[ 4621], 00:37:28.269 | 99.99th=[ 5604] 00:37:28.269 bw ( KiB/s): min=22880, max=23728, per=24.60%, avg=23096.89, stdev=252.74, samples=9 00:37:28.269 iops : min= 2860, max= 2966, avg=2887.11, stdev=31.59, samples=9 00:37:28.269 lat (msec) : 2=0.91%, 4=98.44%, 10=0.65% 00:37:28.269 cpu : usr=96.52%, sys=3.24%, ctx=7, majf=0, minf=68 00:37:28.269 IO depths : 1=0.1%, 2=0.2%, 4=72.9%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.269 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.269 issued rwts: total=14427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.269 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:28.269 filename1: (groupid=0, jobs=1): err= 0: pid=2446032: Fri Dec 6 18:49:22 2024 00:37:28.269 read: IOPS=3005, BW=23.5MiB/s (24.6MB/s)(117MiB/5001msec) 00:37:28.269 slat (nsec): min=5485, max=52072, avg=8380.63, stdev=3344.43 00:37:28.269 clat (usec): min=846, max=4684, avg=2639.87, stdev=362.43 00:37:28.269 lat (usec): min=854, max=4692, avg=2648.25, stdev=362.29 00:37:28.269 clat percentiles (usec): 00:37:28.269 | 1.00th=[ 1762], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2376], 00:37:28.269 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2704], 00:37:28.269 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 3064], 95.00th=[ 3359], 00:37:28.269 | 99.00th=[ 3720], 99.50th=[ 3818], 99.90th=[ 4047], 99.95th=[ 4359], 00:37:28.269 | 99.99th=[ 4686] 00:37:28.269 bw ( KiB/s): min=23504, max=25170, per=25.57%, avg=24012.67, stdev=490.47, samples=9 00:37:28.269 iops : min= 2938, max= 3146, avg=3001.56, stdev=61.24, samples=9 00:37:28.269 lat (usec) : 1000=0.03% 00:37:28.269 lat (msec) : 2=2.70%, 4=97.08%, 10=0.19% 00:37:28.269 cpu : usr=91.72%, sys=5.40%, ctx=281, majf=0, minf=87 00:37:28.269 IO depths : 1=0.1%, 2=0.3%, 4=70.4%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.269 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.269 issued rwts: total=15033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.269 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:28.269 00:37:28.269 Run status group 0 (all jobs): 00:37:28.269 READ: bw=91.7MiB/s (96.2MB/s), 22.5MiB/s-23.5MiB/s (23.6MB/s-24.6MB/s), io=459MiB (481MB), run=5001-5002msec 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
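The teardown here runs per subsystem and in a fixed order: the NVMe-oF subsystem is removed first, then the null bdev behind it (the bdev deletions continue below). A standalone sketch of the same sequence using SPDK's rpc.py, assuming $SPDK_DIR points at an SPDK checkout and the target app is reachable over the default RPC socket; the names are copied from the trace:

    for i in 0 1; do
        # Remove the subsystem before the bdev that backs it, mirroring
        # destroy_subsystem in target/dif.sh.
        "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        "$SPDK_DIR/scripts/rpc.py" bdev_null_delete "bdev_null$i"
    done
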
00:37:28.269 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.270 00:37:28.270 real 0m24.507s 00:37:28.270 user 5m14.396s 00:37:28.270 sys 0m4.953s 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 ************************************ 00:37:28.270 END TEST fio_dif_rand_params 00:37:28.270 ************************************ 00:37:28.270 18:49:22 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:28.270 18:49:22 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:28.270 18:49:22 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 ************************************ 00:37:28.270 START TEST fio_dif_digest 00:37:28.270 ************************************ 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:28.270 18:49:22 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 bdev_null0 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.270 [2024-12-06 18:49:22.930036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.270 { 00:37:28.270 "params": { 00:37:28.270 "name": "Nvme$subsystem", 00:37:28.270 "trtype": "$TEST_TRANSPORT", 00:37:28.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.270 "adrfam": "ipv4", 00:37:28.270 "trsvcid": "$NVMF_PORT", 00:37:28.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:37:28.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.270 "hdgst": ${hdgst:-false}, 00:37:28.270 "ddgst": ${ddgst:-false} 00:37:28.270 }, 00:37:28.270 "method": "bdev_nvme_attach_controller" 00:37:28.270 } 00:37:28.270 EOF 00:37:28.270 )") 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.270 "params": { 00:37:28.270 "name": "Nvme0", 00:37:28.270 "trtype": "tcp", 00:37:28.270 "traddr": "10.0.0.2", 00:37:28.270 "adrfam": "ipv4", 00:37:28.270 "trsvcid": "4420", 00:37:28.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.270 "hdgst": true, 00:37:28.270 "ddgst": true 00:37:28.270 }, 00:37:28.270 "method": "bdev_nvme_attach_controller" 00:37:28.270 }' 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:28.270 18:49:22 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:28.270 18:49:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:28.270 18:49:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:28.270 18:49:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:28.270 18:49:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.840 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:28.840 ... 
00:37:28.840 fio-3.35 00:37:28.840 Starting 3 threads 00:37:41.068 00:37:41.068 filename0: (groupid=0, jobs=1): err= 0: pid=2447538: Fri Dec 6 18:49:33 2024 00:37:41.068 read: IOPS=318, BW=39.9MiB/s (41.8MB/s)(400MiB/10046msec) 00:37:41.068 slat (nsec): min=5907, max=32371, avg=8267.91, stdev=1700.18 00:37:41.068 clat (usec): min=6491, max=48542, avg=9385.37, stdev=1202.97 00:37:41.068 lat (usec): min=6497, max=48549, avg=9393.64, stdev=1203.02 00:37:41.068 clat percentiles (usec): 00:37:41.068 | 1.00th=[ 7635], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 8717], 00:37:41.068 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:37:41.068 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10552], 00:37:41.068 | 99.00th=[11076], 99.50th=[11338], 99.90th=[11863], 99.95th=[45876], 00:37:41.068 | 99.99th=[48497] 00:37:41.068 bw ( KiB/s): min=39168, max=42496, per=36.55%, avg=40972.80, stdev=758.85, samples=20 00:37:41.068 iops : min= 306, max= 332, avg=320.10, stdev= 5.93, samples=20 00:37:41.068 lat (msec) : 10=81.58%, 20=18.36%, 50=0.06% 00:37:41.068 cpu : usr=94.61%, sys=5.14%, ctx=17, majf=0, minf=164 00:37:41.068 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.068 issued rwts: total=3203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:41.068 filename0: (groupid=0, jobs=1): err= 0: pid=2447539: Fri Dec 6 18:49:33 2024 00:37:41.068 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(336MiB/10045msec) 00:37:41.068 slat (nsec): min=5858, max=31624, avg=8606.92, stdev=1905.68 00:37:41.068 clat (usec): min=8566, max=53030, avg=11192.59, stdev=1348.41 00:37:41.068 lat (usec): min=8575, max=53037, avg=11201.20, stdev=1348.37 00:37:41.068 clat percentiles (usec): 00:37:41.068 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10421], 00:37:41.068 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:37:41.068 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:37:41.068 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14615], 99.95th=[45876], 00:37:41.068 | 99.99th=[53216] 00:37:41.068 bw ( KiB/s): min=33280, max=35328, per=30.65%, avg=34355.20, stdev=495.57, samples=20 00:37:41.068 iops : min= 260, max= 276, avg=268.40, stdev= 3.87, samples=20 00:37:41.068 lat (msec) : 10=8.27%, 20=91.66%, 50=0.04%, 100=0.04% 00:37:41.068 cpu : usr=94.40%, sys=5.33%, ctx=21, majf=0, minf=113 00:37:41.068 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.068 issued rwts: total=2686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:41.068 filename0: (groupid=0, jobs=1): err= 0: pid=2447540: Fri Dec 6 18:49:33 2024 00:37:41.068 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(364MiB/10048msec) 00:37:41.068 slat (nsec): min=5863, max=32285, avg=8858.67, stdev=1494.51 00:37:41.068 clat (usec): min=7503, max=49906, avg=10330.46, stdev=1315.76 00:37:41.068 lat (usec): min=7512, max=49913, avg=10339.32, stdev=1315.71 00:37:41.068 clat percentiles (usec): 00:37:41.068 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 
00:37:41.068 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:37:41.068 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:37:41.068 | 99.00th=[12387], 99.50th=[12649], 99.90th=[14615], 99.95th=[48497], 00:37:41.068 | 99.99th=[50070] 00:37:41.068 bw ( KiB/s): min=36096, max=38144, per=33.20%, avg=37222.40, stdev=527.92, samples=20 00:37:41.068 iops : min= 282, max= 298, avg=290.80, stdev= 4.12, samples=20 00:37:41.068 lat (msec) : 10=35.93%, 20=64.00%, 50=0.07% 00:37:41.068 cpu : usr=94.67%, sys=5.08%, ctx=38, majf=0, minf=101 00:37:41.068 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:41.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:41.068 issued rwts: total=2911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:41.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:41.068 00:37:41.068 Run status group 0 (all jobs): 00:37:41.068 READ: bw=109MiB/s (115MB/s), 33.4MiB/s-39.9MiB/s (35.0MB/s-41.8MB/s), io=1100MiB (1153MB), run=10045-10048msec 00:37:41.068 18:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:41.068 18:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:41.068 18:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:41.068 18:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:41.068 18:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:41.068 18:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:41.068 18:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.069 18:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:41.069 18:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.069 18:49:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:41.069 18:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.069 18:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:41.069 18:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.069 00:37:41.069 real 0m11.196s 00:37:41.069 user 0m40.161s 00:37:41.069 sys 0m1.872s 00:37:41.069 18:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:41.069 18:49:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:41.069 ************************************ 00:37:41.069 END TEST fio_dif_digest 00:37:41.069 ************************************ 00:37:41.069 18:49:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:41.069 18:49:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:41.069 rmmod nvme_tcp 00:37:41.069 rmmod nvme_fabrics 00:37:41.069 rmmod nvme_keyring 00:37:41.069 18:49:34 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2437127 ']' 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2437127 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2437127 ']' 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2437127 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437127 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437127' 00:37:41.069 killing process with pid 2437127 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2437127 00:37:41.069 18:49:34 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2437127 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:41.069 18:49:34 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:42.985 Waiting for block devices as requested 00:37:43.247 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:43.247 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:43.247 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:43.508 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:43.508 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:43.508 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:43.508 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:43.768 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:43.768 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:44.029 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:44.029 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:44.029 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:44.289 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:44.289 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:44.289 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:44.550 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:44.550 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:44.811 18:49:39 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.811 18:49:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:44.811 18:49:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.356 18:49:41 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:47.356 
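Stripped of the xtrace interleaving, the fio_dif_digest case that just finished reduces to four target-side RPCs plus one fio invocation. The sketch below uses the stock scripts/rpc.py spelling (an assumption; the test itself goes through its rpc_cmd wrapper) with arguments exactly as they appear in the trace above; ./bdev.json and ./job.fio stand in for the /dev/fd pipes the harness uses:

# null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# initiator side: fio through the SPDK bdev engine; header and data digests
# are enabled via "hdgst": true / "ddgst": true in the generated JSON config
LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio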
00:37:47.356 real 1m18.328s 00:37:47.356 user 7m55.413s 00:37:47.356 sys 0m22.453s 00:37:47.356 18:49:41 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:47.356 18:49:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:47.356 ************************************ 00:37:47.356 END TEST nvmf_dif 00:37:47.356 ************************************ 00:37:47.357 18:49:41 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:47.357 18:49:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:47.357 18:49:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:47.357 18:49:41 -- common/autotest_common.sh@10 -- # set +x 00:37:47.357 ************************************ 00:37:47.357 START TEST nvmf_abort_qd_sizes 00:37:47.357 ************************************ 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:47.357 * Looking for test storage... 00:37:47.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:47.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.357 --rc genhtml_branch_coverage=1 00:37:47.357 --rc genhtml_function_coverage=1 00:37:47.357 --rc genhtml_legend=1 00:37:47.357 --rc geninfo_all_blocks=1 00:37:47.357 --rc geninfo_unexecuted_blocks=1 00:37:47.357 00:37:47.357 ' 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:47.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.357 --rc genhtml_branch_coverage=1 00:37:47.357 --rc genhtml_function_coverage=1 00:37:47.357 --rc genhtml_legend=1 00:37:47.357 --rc geninfo_all_blocks=1 00:37:47.357 --rc geninfo_unexecuted_blocks=1 00:37:47.357 00:37:47.357 ' 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:47.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.357 --rc genhtml_branch_coverage=1 00:37:47.357 --rc genhtml_function_coverage=1 00:37:47.357 --rc genhtml_legend=1 00:37:47.357 --rc geninfo_all_blocks=1 00:37:47.357 --rc geninfo_unexecuted_blocks=1 00:37:47.357 00:37:47.357 ' 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:47.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:47.357 --rc genhtml_branch_coverage=1 00:37:47.357 --rc genhtml_function_coverage=1 00:37:47.357 --rc genhtml_legend=1 00:37:47.357 --rc geninfo_all_blocks=1 00:37:47.357 --rc geninfo_unexecuted_blocks=1 00:37:47.357 00:37:47.357 ' 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:47.357 18:49:41 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:47.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:47.358 18:49:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:55.504 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:55.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:55.504 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:55.505 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:55.505 18:49:48 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:55.505 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:55.505 18:49:49 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:55.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:55.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:37:55.505 00:37:55.505 --- 10.0.0.2 ping statistics --- 00:37:55.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.505 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:55.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:55.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:37:55.505 00:37:55.505 --- 10.0.0.1 ping statistics --- 00:37:55.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:55.505 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:55.505 18:49:49 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:58.076 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:58.076 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:58.076 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:58.076 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:58.076 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:58.076 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:58.076 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:58.076 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:58.076 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:58.337 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:58.337 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:58.337 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:58.337 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:58.337 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:58.337 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:58.337 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:58.337 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:58.597 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.597 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:58.597 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:58.597 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.597 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:58.597 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:58.597 18:49:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:58.857 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2456980 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2456980 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2456980 ']' 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:58.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.858 18:49:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:58.858 [2024-12-06 18:49:53.446456] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:37:58.858 [2024-12-06 18:49:53.446515] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.858 [2024-12-06 18:49:53.544536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:58.858 [2024-12-06 18:49:53.600360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.858 [2024-12-06 18:49:53.600413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.858 [2024-12-06 18:49:53.600422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.858 [2024-12-06 18:49:53.600430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.858 [2024-12-06 18:49:53.600436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:58.858 [2024-12-06 18:49:53.602774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.858 [2024-12-06 18:49:53.602919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:58.858 [2024-12-06 18:49:53.603066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.858 [2024-12-06 18:49:53.603066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:59.800 
18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:59.800 18:49:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:59.800 ************************************ 00:37:59.800 START TEST spdk_target_abort 00:37:59.800 ************************************ 00:37:59.800 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:59.800 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:59.800 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:59.800 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.800 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.061 spdk_targetn1 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.061 [2024-12-06 18:49:54.690797] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:00.061 [2024-12-06 18:49:54.743228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:00.061 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:00.062 18:49:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:00.322 [2024-12-06 18:49:54.890897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:32 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:00.322 [2024-12-06 18:49:54.890944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:38:00.322 [2024-12-06 18:49:54.898196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:216 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:00.322 [2024-12-06 18:49:54.898226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:001f p:1 m:0 dnr:0 00:38:00.322 [2024-12-06 18:49:54.961260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2080 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:38:00.322 [2024-12-06 18:49:54.961295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:38:00.322 [2024-12-06 18:49:55.017218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3728 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:38:00.322 [2024-12-06 18:49:55.017252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00d4 p:0 m:0 dnr:0 00:38:03.621 Initializing NVMe Controllers 00:38:03.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:03.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:03.621 Initialization complete. Launching workers. 00:38:03.621 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10993, failed: 4 00:38:03.621 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2771, failed to submit 8226 00:38:03.621 success 747, unsuccessful 2024, failed 0 00:38:03.621 18:49:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:03.621 18:49:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:03.621 [2024-12-06 18:49:58.191786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1192 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:38:03.621 [2024-12-06 18:49:58.191826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:009a p:1 m:0 dnr:0 00:38:03.621 [2024-12-06 18:49:58.199907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:1448 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:38:03.622 [2024-12-06 18:49:58.199930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00b7 p:1 m:0 dnr:0 00:38:03.622 [2024-12-06 18:49:58.207455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:176 nsid:1 lba:1576 len:8 PRP1 0x200004e40000 PRP2 0x0 00:38:03.622 [2024-12-06 18:49:58.207476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:176 cdw0:0 sqhd:00c8 p:1 m:0 dnr:0 00:38:03.622 [2024-12-06 18:49:58.246630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:2352 len:8 PRP1 0x200004e56000 PRP2 0x0 00:38:03.622 [2024-12-06 18:49:58.246658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:38:03.622 [2024-12-06 18:49:58.292706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3392 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:38:03.622 [2024-12-06 18:49:58.292728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00b4 p:0 m:0 dnr:0 00:38:03.622 [2024-12-06 18:49:58.298403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:3568 len:8 PRP1 0x200004e3a000 PRP2 0x0 00:38:03.622 [2024-12-06 18:49:58.298424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:00c1 p:0 m:0 dnr:0 00:38:06.921 Initializing NVMe Controllers 00:38:06.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:06.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:06.921 Initialization complete. Launching workers. 00:38:06.921 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8539, failed: 6 00:38:06.921 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1211, failed to submit 7334 00:38:06.921 success 343, unsuccessful 868, failed 0 00:38:06.921 18:50:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:06.921 18:50:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:06.921 [2024-12-06 18:50:01.511608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1904 len:8 PRP1 0x200004ae8000 PRP2 0x0 00:38:06.921 [2024-12-06 18:50:01.511644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e8 p:0 m:0 dnr:0 00:38:10.218 Initializing NVMe Controllers 00:38:10.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:10.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:10.218 Initialization complete. Launching workers. 
00:38:10.218 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43777, failed: 1 00:38:10.218 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2693, failed to submit 41085 00:38:10.218 success 595, unsuccessful 2098, failed 0 00:38:10.218 18:50:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:10.218 18:50:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.218 18:50:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:10.218 18:50:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:10.218 18:50:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:10.218 18:50:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:10.218 18:50:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:11.618 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:11.618 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2456980 00:38:11.618 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2456980 ']' 00:38:11.618 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2456980 00:38:11.618 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:11.618 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:11.618 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2456980 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2456980' 00:38:11.879 killing process with pid 2456980 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2456980 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2456980 00:38:11.879 00:38:11.879 real 0m12.177s 00:38:11.879 user 0m49.600s 00:38:11.879 sys 0m2.058s 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:11.879 ************************************ 00:38:11.879 END TEST spdk_target_abort 00:38:11.879 ************************************ 00:38:11.879 18:50:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:11.879 18:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:11.879 18:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:11.879 18:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:11.879 ************************************ 00:38:11.879 START TEST kernel_target_abort 00:38:11.879 
************************************ 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:11.879 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:12.140 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:12.140 18:50:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:15.438 Waiting for block devices as requested 00:38:15.438 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:15.438 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:15.438 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:15.700 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:15.700 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:15.700 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:15.960 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:15.960 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:15.960 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:16.222 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:16.222 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:16.482 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:16.482 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:16.482 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:16.743 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:16.743 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:16.743 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:17.004 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:17.266 No valid GPT data, bailing 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:17.266 18:50:11 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:38:17.266
00:38:17.266 Discovery Log Number of Records 2, Generation counter 2
00:38:17.266 =====Discovery Log Entry 0======
00:38:17.266 trtype: tcp
00:38:17.266 adrfam: ipv4
00:38:17.266 subtype: current discovery subsystem
00:38:17.266 treq: not specified, sq flow control disable supported
00:38:17.266 portid: 1
00:38:17.266 trsvcid: 4420
00:38:17.266 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:38:17.266 traddr: 10.0.0.1
00:38:17.266 eflags: none
00:38:17.266 sectype: none
00:38:17.266 =====Discovery Log Entry 1======
00:38:17.266 trtype: tcp
00:38:17.266 adrfam: ipv4
00:38:17.266 subtype: nvme subsystem
00:38:17.266 treq: not specified, sq flow control disable supported
00:38:17.266 portid: 1
00:38:17.266 trsvcid: 4420
00:38:17.266 subnqn: nqn.2016-06.io.spdk:testnqn
00:38:17.266 traddr: 10.0.0.1
00:38:17.266 eflags: none
00:38:17.266 sectype: none
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:38:17.266 18:50:11
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:17.266 18:50:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:20.569 Initializing NVMe Controllers 00:38:20.569 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:20.569 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:20.569 Initialization complete. Launching workers. 00:38:20.569 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67944, failed: 0 00:38:20.569 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67944, failed to submit 0 00:38:20.569 success 0, unsuccessful 67944, failed 0 00:38:20.569 18:50:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:20.569 18:50:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:23.865 Initializing NVMe Controllers 00:38:23.865 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:23.865 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:23.865 Initialization complete. Launching workers. 
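The configure_kernel_target sequence traced above builds the in-kernel nvmet target purely through configfs before rabort starts issuing I/O. Reconstructed as plain shell (xtrace hides the echo redirection targets, so the standard nvmet attribute names are assumed here; the NQN, backing namespace and address are this run's values):

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1
    modprobe nvmet                                   # exposes the configfs tree
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"              # publish the subsystem on the port

The nvme discover output above confirms the result: one discovery entry plus nqn.2016-06.io.spdk:testnqn listening on 10.0.0.1:4420.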
00:38:23.865 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119625, failed: 0 00:38:23.865 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30110, failed to submit 89515 00:38:23.865 success 0, unsuccessful 30110, failed 0 00:38:23.865 18:50:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:23.865 18:50:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:27.162 Initializing NVMe Controllers 00:38:27.162 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:27.162 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:27.162 Initialization complete. Launching workers. 00:38:27.162 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146479, failed: 0 00:38:27.162 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36670, failed to submit 109809 00:38:27.162 success 0, unsuccessful 36670, failed 0 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:27.162 18:50:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:30.557 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:30.557 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:30.557 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:31.938 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:32.509 00:38:32.509 real 0m20.355s 00:38:32.509 user 0m9.893s 00:38:32.509 sys 0m6.102s 00:38:32.509 18:50:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:32.509 18:50:26 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:32.509 ************************************ 00:38:32.509 END TEST kernel_target_abort 00:38:32.509 ************************************ 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:32.509 rmmod nvme_tcp 00:38:32.509 rmmod nvme_fabrics 00:38:32.509 rmmod nvme_keyring 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2456980 ']' 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2456980 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2456980 ']' 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2456980 00:38:32.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2456980) - No such process 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2456980 is not found' 00:38:32.509 Process with pid 2456980 is not found 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:32.509 18:50:27 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:35.806 Waiting for block devices as requested 00:38:35.806 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:35.806 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:36.067 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:36.067 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:36.067 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:36.328 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:36.328 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:36.328 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:36.328 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:36.589 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:36.850 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:36.850 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:36.850 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:37.111 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:37.111 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:37.111 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:37.111 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:37.683 18:50:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.597 18:50:34 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:39.597 00:38:39.597 real 0m52.617s 00:38:39.597 user 1m4.913s 00:38:39.597 sys 0m19.489s 00:38:39.597 18:50:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:39.597 18:50:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:39.597 ************************************ 00:38:39.597 END TEST nvmf_abort_qd_sizes 00:38:39.597 ************************************ 00:38:39.597 18:50:34 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:39.597 18:50:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:39.597 18:50:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:39.597 18:50:34 -- common/autotest_common.sh@10 -- # set +x 00:38:39.597 ************************************ 00:38:39.597 START TEST keyring_file 00:38:39.597 ************************************ 00:38:39.597 18:50:34 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:39.858 * Looking for test storage... 
00:38:39.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:39.858 18:50:34 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:39.858 18:50:34 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:38:39.858 18:50:34 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:39.858 18:50:34 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:39.858 18:50:34 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:39.858 18:50:34 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:39.858 18:50:34 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:39.859 18:50:34 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:39.859 18:50:34 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:39.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.859 --rc genhtml_branch_coverage=1 00:38:39.859 --rc genhtml_function_coverage=1 00:38:39.859 --rc genhtml_legend=1 00:38:39.859 --rc geninfo_all_blocks=1 00:38:39.859 --rc geninfo_unexecuted_blocks=1 00:38:39.859 00:38:39.859 ' 00:38:39.859 18:50:34 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:39.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.859 --rc genhtml_branch_coverage=1 00:38:39.859 --rc genhtml_function_coverage=1 00:38:39.859 --rc genhtml_legend=1 00:38:39.859 --rc geninfo_all_blocks=1 
00:38:39.859 --rc geninfo_unexecuted_blocks=1 00:38:39.859 00:38:39.859 ' 00:38:39.859 18:50:34 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:39.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.859 --rc genhtml_branch_coverage=1 00:38:39.859 --rc genhtml_function_coverage=1 00:38:39.859 --rc genhtml_legend=1 00:38:39.859 --rc geninfo_all_blocks=1 00:38:39.859 --rc geninfo_unexecuted_blocks=1 00:38:39.859 00:38:39.859 ' 00:38:39.859 18:50:34 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:39.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:39.859 --rc genhtml_branch_coverage=1 00:38:39.859 --rc genhtml_function_coverage=1 00:38:39.859 --rc genhtml_legend=1 00:38:39.859 --rc geninfo_all_blocks=1 00:38:39.859 --rc geninfo_unexecuted_blocks=1 00:38:39.859 00:38:39.859 ' 00:38:39.859 18:50:34 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:39.859 18:50:34 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:39.859 18:50:34 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:39.859 18:50:34 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.859 18:50:34 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.859 18:50:34 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.859 18:50:34 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:39.859 18:50:34 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:39.859 18:50:34 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:39.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:39.860 18:50:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:39.860 18:50:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:39.860 18:50:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:39.860 18:50:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:39.860 18:50:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:39.860 18:50:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:39.860 18:50:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:39.860 18:50:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
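prep_key, entered here and traced over the next stretch, turns a hex key into an NVMe/TCP TLS PSK interchange file that the keyring_file tests operate on. A sketch of the effect (the NVMeTLSkey-1 payload is generated by an inline python helper in the real script and is elided below; the /tmp names are this run's mktemp results):

    path=$(mktemp)                            # e.g. /tmp/tmp.Lk85ysYreE
    # interchange format: 'NVMeTLSkey-1:<two-digit hash id>:<base64 payload>:'
    echo "NVMeTLSkey-1:00:...:" > "$path"     # '...' stands for the encoded key; 00 assumed for digest 0
    chmod 0600 "$path"                        # owner-only; the test later proves keyring_file enforces this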
00:38:39.860 18:50:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:39.860 18:50:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:39.860 18:50:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:39.860 18:50:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:39.860 18:50:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Lk85ysYreE 00:38:39.860 18:50:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:39.860 18:50:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Lk85ysYreE 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Lk85ysYreE 00:38:40.121 18:50:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Lk85ysYreE 00:38:40.121 18:50:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wz9wrZbRMJ 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:40.121 18:50:34 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:40.121 18:50:34 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:40.121 18:50:34 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:40.121 18:50:34 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:40.121 18:50:34 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:40.121 18:50:34 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wz9wrZbRMJ 00:38:40.121 18:50:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wz9wrZbRMJ 00:38:40.121 18:50:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.wz9wrZbRMJ 00:38:40.121 18:50:34 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:40.121 18:50:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=2467194 00:38:40.121 18:50:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2467194 00:38:40.121 18:50:34 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2467194 ']' 00:38:40.121 18:50:34 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:40.121 18:50:34 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:40.121 18:50:34 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:40.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:40.121 18:50:34 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:40.121 18:50:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:40.121 [2024-12-06 18:50:34.742756] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:38:40.121 [2024-12-06 18:50:34.742812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467194 ] 00:38:40.121 [2024-12-06 18:50:34.828357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.121 [2024-12-06 18:50:34.865437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:41.062 18:50:35 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:41.062 [2024-12-06 18:50:35.549061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:41.062 null0 00:38:41.062 [2024-12-06 18:50:35.581101] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:41.062 [2024-12-06 18:50:35.581389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:41.062 18:50:35 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:41.062 [2024-12-06 18:50:35.613182] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:41.062 request: 00:38:41.062 { 00:38:41.062 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:41.062 "secure_channel": false, 00:38:41.062 "listen_address": { 00:38:41.062 "trtype": "tcp", 00:38:41.062 "traddr": "127.0.0.1", 00:38:41.062 "trsvcid": "4420" 00:38:41.062 }, 00:38:41.062 "method": "nvmf_subsystem_add_listener", 00:38:41.062 "req_id": 1 00:38:41.062 } 00:38:41.062 Got JSON-RPC error response 00:38:41.062 response: 00:38:41.062 { 00:38:41.062 
"code": -32602, 00:38:41.062 "message": "Invalid parameters" 00:38:41.062 } 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:41.062 18:50:35 keyring_file -- keyring/file.sh@47 -- # bperfpid=2467215 00:38:41.062 18:50:35 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2467215 /var/tmp/bperf.sock 00:38:41.062 18:50:35 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2467215 ']' 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:41.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:41.062 18:50:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:41.062 [2024-12-06 18:50:35.670199] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:38:41.062 [2024-12-06 18:50:35.670247] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2467215 ] 00:38:41.062 [2024-12-06 18:50:35.757859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:41.062 [2024-12-06 18:50:35.794717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:42.006 18:50:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:42.006 18:50:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:42.006 18:50:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lk85ysYreE 00:38:42.006 18:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Lk85ysYreE 00:38:42.006 18:50:36 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wz9wrZbRMJ 00:38:42.006 18:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wz9wrZbRMJ 00:38:42.268 18:50:36 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:42.268 18:50:36 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:42.268 18:50:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.268 18:50:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.268 18:50:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:38:42.529 18:50:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Lk85ysYreE == \/\t\m\p\/\t\m\p\.\L\k\8\5\y\s\Y\r\e\E ]] 00:38:42.529 18:50:37 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:42.529 18:50:37 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:42.529 18:50:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.529 18:50:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.529 18:50:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:42.529 18:50:37 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.wz9wrZbRMJ == \/\t\m\p\/\t\m\p\.\w\z\9\w\r\Z\b\R\M\J ]] 00:38:42.529 18:50:37 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:42.529 18:50:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:42.529 18:50:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.529 18:50:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.529 18:50:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:42.529 18:50:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.790 18:50:37 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:42.790 18:50:37 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:42.790 18:50:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:42.790 18:50:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:42.790 18:50:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:42.790 18:50:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:42.790 18:50:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:43.051 18:50:37 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:43.051 18:50:37 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:43.051 18:50:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:43.051 [2024-12-06 18:50:37.810174] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:43.312 nvme0n1 00:38:43.312 18:50:37 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:43.312 18:50:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:43.312 18:50:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:43.312 18:50:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.312 18:50:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:43.312 18:50:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.573 18:50:38 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:43.573 18:50:38 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:43.573 18:50:38 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:43.573 18:50:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:43.573 18:50:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:43.573 18:50:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:43.573 18:50:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:43.573 18:50:38 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:43.573 18:50:38 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:43.832 Running I/O for 1 seconds... 00:38:44.773 20135.00 IOPS, 78.65 MiB/s 00:38:44.773 Latency(us) 00:38:44.773 [2024-12-06T17:50:39.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:44.773 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:44.773 nvme0n1 : 1.00 20183.74 78.84 0.00 0.00 6330.07 3877.55 17694.72 00:38:44.773 [2024-12-06T17:50:39.557Z] =================================================================================================================== 00:38:44.773 [2024-12-06T17:50:39.557Z] Total : 20183.74 78.84 0.00 0.00 6330.07 3877.55 17694.72 00:38:44.773 { 00:38:44.773 "results": [ 00:38:44.773 { 00:38:44.773 "job": "nvme0n1", 00:38:44.773 "core_mask": "0x2", 00:38:44.773 "workload": "randrw", 00:38:44.773 "percentage": 50, 00:38:44.773 "status": "finished", 00:38:44.773 "queue_depth": 128, 00:38:44.773 "io_size": 4096, 00:38:44.773 "runtime": 1.003927, 00:38:44.773 "iops": 20183.738459071228, 00:38:44.773 "mibps": 78.84272835574698, 00:38:44.773 "io_failed": 0, 00:38:44.773 "io_timeout": 0, 00:38:44.773 "avg_latency_us": 6330.069537580813, 00:38:44.773 "min_latency_us": 3877.5466666666666, 00:38:44.773 "max_latency_us": 17694.72 00:38:44.773 } 00:38:44.773 ], 00:38:44.773 "core_count": 1 00:38:44.773 } 00:38:44.773 18:50:39 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:44.773 18:50:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:45.035 18:50:39 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.035 18:50:39 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:45.035 18:50:39 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:45.035 18:50:39 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.297 18:50:39 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:45.297 18:50:39 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:45.297 18:50:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:45.297 18:50:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:45.297 18:50:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:45.297 18:50:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.297 18:50:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:45.297 18:50:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:45.297 18:50:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:45.297 18:50:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:45.558 [2024-12-06 18:50:40.123839] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:45.558 [2024-12-06 18:50:40.123859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e5870 (107): Transport endpoint is not connected 00:38:45.558 [2024-12-06 18:50:40.124853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e5870 (9): Bad file descriptor 00:38:45.558 [2024-12-06 18:50:40.125854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:45.558 [2024-12-06 18:50:40.125863] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:45.558 [2024-12-06 18:50:40.125869] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:45.558 [2024-12-06 18:50:40.125880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
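The errors above are the intended negative case: nvme0 was attached with key0, and this second attach supplies key1, which is not a PSK the target accepts for this host, so the connection is torn down during setup and the RPC (dumped below) fails with -5 / Input/output error. The failing call, with flags as they appear in the trace:

    # expected to fail: the target accepts key0 for this host, not key1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1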
00:38:45.558 request: 00:38:45.558 { 00:38:45.558 "name": "nvme0", 00:38:45.558 "trtype": "tcp", 00:38:45.558 "traddr": "127.0.0.1", 00:38:45.558 "adrfam": "ipv4", 00:38:45.558 "trsvcid": "4420", 00:38:45.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:45.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:45.558 "prchk_reftag": false, 00:38:45.558 "prchk_guard": false, 00:38:45.558 "hdgst": false, 00:38:45.558 "ddgst": false, 00:38:45.558 "psk": "key1", 00:38:45.558 "allow_unrecognized_csi": false, 00:38:45.558 "method": "bdev_nvme_attach_controller", 00:38:45.558 "req_id": 1 00:38:45.558 } 00:38:45.558 Got JSON-RPC error response 00:38:45.558 response: 00:38:45.558 { 00:38:45.558 "code": -5, 00:38:45.558 "message": "Input/output error" 00:38:45.558 } 00:38:45.558 18:50:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:45.558 18:50:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:45.558 18:50:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:45.558 18:50:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:45.558 18:50:40 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:45.558 18:50:40 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:45.558 18:50:40 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:45.558 18:50:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:45.818 18:50:40 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:45.818 18:50:40 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:45.818 18:50:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:46.079 18:50:40 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:46.079 18:50:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:46.339 18:50:40 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:46.339 18:50:40 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:46.339 18:50:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.339 18:50:41 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:46.339 18:50:41 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Lk85ysYreE 00:38:46.339 18:50:41 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lk85ysYreE 00:38:46.339 18:50:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:46.339 18:50:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lk85ysYreE 00:38:46.340 18:50:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:46.340 18:50:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.340 18:50:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:46.340 18:50:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.340 18:50:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lk85ysYreE 00:38:46.340 18:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Lk85ysYreE 00:38:46.599 [2024-12-06 18:50:41.195555] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Lk85ysYreE': 0100660 00:38:46.599 [2024-12-06 18:50:41.195579] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:46.599 request: 00:38:46.599 { 00:38:46.599 "name": "key0", 00:38:46.599 "path": "/tmp/tmp.Lk85ysYreE", 00:38:46.599 "method": "keyring_file_add_key", 00:38:46.599 "req_id": 1 00:38:46.599 } 00:38:46.599 Got JSON-RPC error response 00:38:46.599 response: 00:38:46.599 { 00:38:46.599 "code": -1, 00:38:46.599 "message": "Operation not permitted" 00:38:46.599 } 00:38:46.599 18:50:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:46.599 18:50:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:46.599 18:50:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:46.599 18:50:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:46.599 18:50:41 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Lk85ysYreE 00:38:46.599 18:50:41 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lk85ysYreE 00:38:46.599 18:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Lk85ysYreE 00:38:46.860 18:50:41 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Lk85ysYreE 00:38:46.860 18:50:41 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:46.860 18:50:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:46.860 18:50:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:46.860 18:50:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:46.860 18:50:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:46.860 18:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:46.860 18:50:41 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:46.860 18:50:41 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:46.860 18:50:41 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:46.860 18:50:41 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:46.860 18:50:41 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:46.860 18:50:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.860 18:50:41 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:46.860 18:50:41 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:46.860 18:50:41 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:46.860 18:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:47.121 [2024-12-06 18:50:41.720888] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Lk85ysYreE': No such file or directory 00:38:47.121 [2024-12-06 18:50:41.720902] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:47.121 [2024-12-06 18:50:41.720915] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:47.121 [2024-12-06 18:50:41.720920] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:47.121 [2024-12-06 18:50:41.720926] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:47.121 [2024-12-06 18:50:41.720931] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:47.121 request: 00:38:47.121 { 00:38:47.121 "name": "nvme0", 00:38:47.121 "trtype": "tcp", 00:38:47.121 "traddr": "127.0.0.1", 00:38:47.121 "adrfam": "ipv4", 00:38:47.121 "trsvcid": "4420", 00:38:47.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:47.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:47.121 "prchk_reftag": false, 00:38:47.121 "prchk_guard": false, 00:38:47.121 "hdgst": false, 00:38:47.121 "ddgst": false, 00:38:47.121 "psk": "key0", 00:38:47.121 "allow_unrecognized_csi": false, 00:38:47.121 "method": "bdev_nvme_attach_controller", 00:38:47.121 "req_id": 1 00:38:47.121 } 00:38:47.121 Got JSON-RPC error response 00:38:47.121 response: 00:38:47.121 { 00:38:47.121 "code": -19, 00:38:47.121 "message": "No such device" 00:38:47.121 } 00:38:47.121 18:50:41 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:47.121 18:50:41 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:47.121 18:50:41 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:47.121 18:50:41 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:47.121 18:50:41 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:47.121 18:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:47.381 18:50:41 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UACuymN9xa 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:47.381 18:50:41 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:47.381 18:50:41 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:47.381 18:50:41 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:47.381 18:50:41 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:47.381 18:50:41 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:47.381 18:50:41 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UACuymN9xa 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UACuymN9xa 00:38:47.381 18:50:41 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.UACuymN9xa 00:38:47.381 18:50:41 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UACuymN9xa 00:38:47.381 18:50:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UACuymN9xa 00:38:47.381 18:50:42 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:47.381 18:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:47.642 nvme0n1 00:38:47.642 18:50:42 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:47.642 18:50:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:47.642 18:50:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:47.642 18:50:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:47.642 18:50:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:47.642 18:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:47.903 18:50:42 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:47.903 18:50:42 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:47.903 18:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:48.164 18:50:42 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:48.164 18:50:42 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:48.164 18:50:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.164 18:50:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:48.164 18:50:42 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.164 18:50:42 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:48.164 18:50:42 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:48.164 18:50:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:48.164 18:50:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.164 18:50:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.164 18:50:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.164 18:50:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:48.425 18:50:43 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:48.425 18:50:43 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:48.425 18:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:48.686 18:50:43 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:48.686 18:50:43 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:48.686 18:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.686 18:50:43 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:48.686 18:50:43 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UACuymN9xa 00:38:48.686 18:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UACuymN9xa 00:38:48.946 18:50:43 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wz9wrZbRMJ 00:38:48.946 18:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wz9wrZbRMJ 00:38:49.207 18:50:43 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:49.207 18:50:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:49.467 nvme0n1 00:38:49.467 18:50:44 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:49.467 18:50:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:49.729 18:50:44 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:49.729 "subsystems": [ 00:38:49.729 { 00:38:49.729 "subsystem": "keyring", 00:38:49.729 "config": [ 00:38:49.729 { 00:38:49.729 "method": "keyring_file_add_key", 00:38:49.729 "params": { 00:38:49.729 "name": "key0", 00:38:49.729 "path": "/tmp/tmp.UACuymN9xa" 00:38:49.729 } 00:38:49.729 }, 00:38:49.729 { 00:38:49.729 "method": "keyring_file_add_key", 00:38:49.729 "params": { 00:38:49.729 "name": "key1", 00:38:49.729 "path": "/tmp/tmp.wz9wrZbRMJ" 00:38:49.729 } 00:38:49.729 } 00:38:49.729 ] 00:38:49.729 
}, 00:38:49.729 { 00:38:49.729 "subsystem": "iobuf", 00:38:49.729 "config": [ 00:38:49.729 { 00:38:49.729 "method": "iobuf_set_options", 00:38:49.729 "params": { 00:38:49.729 "small_pool_count": 8192, 00:38:49.729 "large_pool_count": 1024, 00:38:49.729 "small_bufsize": 8192, 00:38:49.729 "large_bufsize": 135168, 00:38:49.729 "enable_numa": false 00:38:49.729 } 00:38:49.729 } 00:38:49.729 ] 00:38:49.729 }, 00:38:49.729 { 00:38:49.729 "subsystem": "sock", 00:38:49.729 "config": [ 00:38:49.729 { 00:38:49.729 "method": "sock_set_default_impl", 00:38:49.730 "params": { 00:38:49.730 "impl_name": "posix" 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "sock_impl_set_options", 00:38:49.730 "params": { 00:38:49.730 "impl_name": "ssl", 00:38:49.730 "recv_buf_size": 4096, 00:38:49.730 "send_buf_size": 4096, 00:38:49.730 "enable_recv_pipe": true, 00:38:49.730 "enable_quickack": false, 00:38:49.730 "enable_placement_id": 0, 00:38:49.730 "enable_zerocopy_send_server": true, 00:38:49.730 "enable_zerocopy_send_client": false, 00:38:49.730 "zerocopy_threshold": 0, 00:38:49.730 "tls_version": 0, 00:38:49.730 "enable_ktls": false 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "sock_impl_set_options", 00:38:49.730 "params": { 00:38:49.730 "impl_name": "posix", 00:38:49.730 "recv_buf_size": 2097152, 00:38:49.730 "send_buf_size": 2097152, 00:38:49.730 "enable_recv_pipe": true, 00:38:49.730 "enable_quickack": false, 00:38:49.730 "enable_placement_id": 0, 00:38:49.730 "enable_zerocopy_send_server": true, 00:38:49.730 "enable_zerocopy_send_client": false, 00:38:49.730 "zerocopy_threshold": 0, 00:38:49.730 "tls_version": 0, 00:38:49.730 "enable_ktls": false 00:38:49.730 } 00:38:49.730 } 00:38:49.730 ] 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "subsystem": "vmd", 00:38:49.730 "config": [] 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "subsystem": "accel", 00:38:49.730 "config": [ 00:38:49.730 { 00:38:49.730 "method": "accel_set_options", 00:38:49.730 "params": { 00:38:49.730 "small_cache_size": 128, 00:38:49.730 "large_cache_size": 16, 00:38:49.730 "task_count": 2048, 00:38:49.730 "sequence_count": 2048, 00:38:49.730 "buf_count": 2048 00:38:49.730 } 00:38:49.730 } 00:38:49.730 ] 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "subsystem": "bdev", 00:38:49.730 "config": [ 00:38:49.730 { 00:38:49.730 "method": "bdev_set_options", 00:38:49.730 "params": { 00:38:49.730 "bdev_io_pool_size": 65535, 00:38:49.730 "bdev_io_cache_size": 256, 00:38:49.730 "bdev_auto_examine": true, 00:38:49.730 "iobuf_small_cache_size": 128, 00:38:49.730 "iobuf_large_cache_size": 16 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "bdev_raid_set_options", 00:38:49.730 "params": { 00:38:49.730 "process_window_size_kb": 1024, 00:38:49.730 "process_max_bandwidth_mb_sec": 0 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "bdev_iscsi_set_options", 00:38:49.730 "params": { 00:38:49.730 "timeout_sec": 30 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "bdev_nvme_set_options", 00:38:49.730 "params": { 00:38:49.730 "action_on_timeout": "none", 00:38:49.730 "timeout_us": 0, 00:38:49.730 "timeout_admin_us": 0, 00:38:49.730 "keep_alive_timeout_ms": 10000, 00:38:49.730 "arbitration_burst": 0, 00:38:49.730 "low_priority_weight": 0, 00:38:49.730 "medium_priority_weight": 0, 00:38:49.730 "high_priority_weight": 0, 00:38:49.730 "nvme_adminq_poll_period_us": 10000, 00:38:49.730 "nvme_ioq_poll_period_us": 0, 00:38:49.730 "io_queue_requests": 512, 00:38:49.730 
"delay_cmd_submit": true, 00:38:49.730 "transport_retry_count": 4, 00:38:49.730 "bdev_retry_count": 3, 00:38:49.730 "transport_ack_timeout": 0, 00:38:49.730 "ctrlr_loss_timeout_sec": 0, 00:38:49.730 "reconnect_delay_sec": 0, 00:38:49.730 "fast_io_fail_timeout_sec": 0, 00:38:49.730 "disable_auto_failback": false, 00:38:49.730 "generate_uuids": false, 00:38:49.730 "transport_tos": 0, 00:38:49.730 "nvme_error_stat": false, 00:38:49.730 "rdma_srq_size": 0, 00:38:49.730 "io_path_stat": false, 00:38:49.730 "allow_accel_sequence": false, 00:38:49.730 "rdma_max_cq_size": 0, 00:38:49.730 "rdma_cm_event_timeout_ms": 0, 00:38:49.730 "dhchap_digests": [ 00:38:49.730 "sha256", 00:38:49.730 "sha384", 00:38:49.730 "sha512" 00:38:49.730 ], 00:38:49.730 "dhchap_dhgroups": [ 00:38:49.730 "null", 00:38:49.730 "ffdhe2048", 00:38:49.730 "ffdhe3072", 00:38:49.730 "ffdhe4096", 00:38:49.730 "ffdhe6144", 00:38:49.730 "ffdhe8192" 00:38:49.730 ] 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "bdev_nvme_attach_controller", 00:38:49.730 "params": { 00:38:49.730 "name": "nvme0", 00:38:49.730 "trtype": "TCP", 00:38:49.730 "adrfam": "IPv4", 00:38:49.730 "traddr": "127.0.0.1", 00:38:49.730 "trsvcid": "4420", 00:38:49.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:49.730 "prchk_reftag": false, 00:38:49.730 "prchk_guard": false, 00:38:49.730 "ctrlr_loss_timeout_sec": 0, 00:38:49.730 "reconnect_delay_sec": 0, 00:38:49.730 "fast_io_fail_timeout_sec": 0, 00:38:49.730 "psk": "key0", 00:38:49.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:49.730 "hdgst": false, 00:38:49.730 "ddgst": false, 00:38:49.730 "multipath": "multipath" 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "bdev_nvme_set_hotplug", 00:38:49.730 "params": { 00:38:49.730 "period_us": 100000, 00:38:49.730 "enable": false 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "bdev_wait_for_examine" 00:38:49.730 } 00:38:49.730 ] 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "subsystem": "nbd", 00:38:49.730 "config": [] 00:38:49.730 } 00:38:49.730 ] 00:38:49.730 }' 00:38:49.730 18:50:44 keyring_file -- keyring/file.sh@115 -- # killprocess 2467215 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2467215 ']' 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2467215 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2467215 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467215' 00:38:49.730 killing process with pid 2467215 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@973 -- # kill 2467215 00:38:49.730 Received shutdown signal, test time was about 1.000000 seconds 00:38:49.730 00:38:49.730 Latency(us) 00:38:49.730 [2024-12-06T17:50:44.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:49.730 [2024-12-06T17:50:44.514Z] =================================================================================================================== 00:38:49.730 [2024-12-06T17:50:44.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:49.730 18:50:44 
keyring_file -- common/autotest_common.sh@978 -- # wait 2467215 00:38:49.730 18:50:44 keyring_file -- keyring/file.sh@118 -- # bperfpid=2469027 00:38:49.730 18:50:44 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2469027 /var/tmp/bperf.sock 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2469027 ']' 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:49.730 18:50:44 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:49.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:49.730 18:50:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:49.730 18:50:44 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:49.730 "subsystems": [ 00:38:49.730 { 00:38:49.730 "subsystem": "keyring", 00:38:49.730 "config": [ 00:38:49.730 { 00:38:49.730 "method": "keyring_file_add_key", 00:38:49.730 "params": { 00:38:49.730 "name": "key0", 00:38:49.730 "path": "/tmp/tmp.UACuymN9xa" 00:38:49.730 } 00:38:49.730 }, 00:38:49.730 { 00:38:49.730 "method": "keyring_file_add_key", 00:38:49.730 "params": { 00:38:49.730 "name": "key1", 00:38:49.730 "path": "/tmp/tmp.wz9wrZbRMJ" 00:38:49.730 } 00:38:49.731 } 00:38:49.731 ] 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "subsystem": "iobuf", 00:38:49.731 "config": [ 00:38:49.731 { 00:38:49.731 "method": "iobuf_set_options", 00:38:49.731 "params": { 00:38:49.731 "small_pool_count": 8192, 00:38:49.731 "large_pool_count": 1024, 00:38:49.731 "small_bufsize": 8192, 00:38:49.731 "large_bufsize": 135168, 00:38:49.731 "enable_numa": false 00:38:49.731 } 00:38:49.731 } 00:38:49.731 ] 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "subsystem": "sock", 00:38:49.731 "config": [ 00:38:49.731 { 00:38:49.731 "method": "sock_set_default_impl", 00:38:49.731 "params": { 00:38:49.731 "impl_name": "posix" 00:38:49.731 } 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "method": "sock_impl_set_options", 00:38:49.731 "params": { 00:38:49.731 "impl_name": "ssl", 00:38:49.731 "recv_buf_size": 4096, 00:38:49.731 "send_buf_size": 4096, 00:38:49.731 "enable_recv_pipe": true, 00:38:49.731 "enable_quickack": false, 00:38:49.731 "enable_placement_id": 0, 00:38:49.731 "enable_zerocopy_send_server": true, 00:38:49.731 "enable_zerocopy_send_client": false, 00:38:49.731 "zerocopy_threshold": 0, 00:38:49.731 "tls_version": 0, 00:38:49.731 "enable_ktls": false 00:38:49.731 } 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "method": "sock_impl_set_options", 00:38:49.731 "params": { 00:38:49.731 "impl_name": "posix", 00:38:49.731 "recv_buf_size": 2097152, 00:38:49.731 "send_buf_size": 2097152, 00:38:49.731 "enable_recv_pipe": true, 00:38:49.731 "enable_quickack": false, 00:38:49.731 "enable_placement_id": 0, 00:38:49.731 "enable_zerocopy_send_server": true, 00:38:49.731 "enable_zerocopy_send_client": false, 00:38:49.731 "zerocopy_threshold": 0, 00:38:49.731 "tls_version": 0, 00:38:49.731 "enable_ktls": false 00:38:49.731 } 00:38:49.731 } 00:38:49.731 ] 00:38:49.731 }, 
00:38:49.731 { 00:38:49.731 "subsystem": "vmd", 00:38:49.731 "config": [] 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "subsystem": "accel", 00:38:49.731 "config": [ 00:38:49.731 { 00:38:49.731 "method": "accel_set_options", 00:38:49.731 "params": { 00:38:49.731 "small_cache_size": 128, 00:38:49.731 "large_cache_size": 16, 00:38:49.731 "task_count": 2048, 00:38:49.731 "sequence_count": 2048, 00:38:49.731 "buf_count": 2048 00:38:49.731 } 00:38:49.731 } 00:38:49.731 ] 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "subsystem": "bdev", 00:38:49.731 "config": [ 00:38:49.731 { 00:38:49.731 "method": "bdev_set_options", 00:38:49.731 "params": { 00:38:49.731 "bdev_io_pool_size": 65535, 00:38:49.731 "bdev_io_cache_size": 256, 00:38:49.731 "bdev_auto_examine": true, 00:38:49.731 "iobuf_small_cache_size": 128, 00:38:49.731 "iobuf_large_cache_size": 16 00:38:49.731 } 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "method": "bdev_raid_set_options", 00:38:49.731 "params": { 00:38:49.731 "process_window_size_kb": 1024, 00:38:49.731 "process_max_bandwidth_mb_sec": 0 00:38:49.731 } 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "method": "bdev_iscsi_set_options", 00:38:49.731 "params": { 00:38:49.731 "timeout_sec": 30 00:38:49.731 } 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "method": "bdev_nvme_set_options", 00:38:49.731 "params": { 00:38:49.731 "action_on_timeout": "none", 00:38:49.731 "timeout_us": 0, 00:38:49.731 "timeout_admin_us": 0, 00:38:49.731 "keep_alive_timeout_ms": 10000, 00:38:49.731 "arbitration_burst": 0, 00:38:49.731 "low_priority_weight": 0, 00:38:49.731 "medium_priority_weight": 0, 00:38:49.731 "high_priority_weight": 0, 00:38:49.731 "nvme_adminq_poll_period_us": 10000, 00:38:49.731 "nvme_ioq_poll_period_us": 0, 00:38:49.731 "io_queue_requests": 512, 00:38:49.731 "delay_cmd_submit": true, 00:38:49.731 "transport_retry_count": 4, 00:38:49.731 "bdev_retry_count": 3, 00:38:49.731 "transport_ack_timeout": 0, 00:38:49.731 "ctrlr_loss_timeout_sec": 0, 00:38:49.731 "reconnect_delay_sec": 0, 00:38:49.731 "fast_io_fail_timeout_sec": 0, 00:38:49.731 "disable_auto_failback": false, 00:38:49.731 "generate_uuids": false, 00:38:49.731 "transport_tos": 0, 00:38:49.731 "nvme_error_stat": false, 00:38:49.731 "rdma_srq_size": 0, 00:38:49.731 "io_path_stat": false, 00:38:49.731 "allow_accel_sequence": false, 00:38:49.731 "rdma_max_cq_size": 0, 00:38:49.731 "rdma_cm_event_timeout_ms": 0, 00:38:49.731 "dhchap_digests": [ 00:38:49.731 "sha256", 00:38:49.731 "sha384", 00:38:49.731 "sha512" 00:38:49.731 ], 00:38:49.731 "dhchap_dhgroups": [ 00:38:49.731 "null", 00:38:49.731 "ffdhe2048", 00:38:49.731 "ffdhe3072", 00:38:49.731 "ffdhe4096", 00:38:49.731 "ffdhe6144", 00:38:49.731 "ffdhe8192" 00:38:49.731 ] 00:38:49.731 } 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "method": "bdev_nvme_attach_controller", 00:38:49.731 "params": { 00:38:49.731 "name": "nvme0", 00:38:49.731 "trtype": "TCP", 00:38:49.731 "adrfam": "IPv4", 00:38:49.731 "traddr": "127.0.0.1", 00:38:49.731 "trsvcid": "4420", 00:38:49.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:49.731 "prchk_reftag": false, 00:38:49.731 "prchk_guard": false, 00:38:49.731 "ctrlr_loss_timeout_sec": 0, 00:38:49.731 "reconnect_delay_sec": 0, 00:38:49.731 "fast_io_fail_timeout_sec": 0, 00:38:49.731 "psk": "key0", 00:38:49.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:49.731 "hdgst": false, 00:38:49.731 "ddgst": false, 00:38:49.731 "multipath": "multipath" 00:38:49.731 } 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "method": "bdev_nvme_set_hotplug", 00:38:49.731 "params": { 
00:38:49.731 "period_us": 100000, 00:38:49.731 "enable": false 00:38:49.731 } 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "method": "bdev_wait_for_examine" 00:38:49.731 } 00:38:49.731 ] 00:38:49.731 }, 00:38:49.731 { 00:38:49.731 "subsystem": "nbd", 00:38:49.731 "config": [] 00:38:49.731 } 00:38:49.731 ] 00:38:49.731 }' 00:38:49.731 [2024-12-06 18:50:44.494148] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 00:38:49.731 [2024-12-06 18:50:44.494207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469027 ] 00:38:49.991 [2024-12-06 18:50:44.575533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:49.991 [2024-12-06 18:50:44.604508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:49.991 [2024-12-06 18:50:44.748408] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:50.561 18:50:45 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:50.561 18:50:45 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:50.561 18:50:45 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:50.561 18:50:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.561 18:50:45 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:50.821 18:50:45 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:50.821 18:50:45 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:50.821 18:50:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:50.821 18:50:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.821 18:50:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.821 18:50:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:50.821 18:50:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.080 18:50:45 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:51.080 18:50:45 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:51.080 18:50:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:51.080 18:50:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:51.080 18:50:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.080 18:50:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:51.080 18:50:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.080 18:50:45 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:51.080 18:50:45 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:51.080 18:50:45 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:51.080 18:50:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:51.340 18:50:45 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:51.340 18:50:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:51.340 18:50:45 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.UACuymN9xa /tmp/tmp.wz9wrZbRMJ 00:38:51.340 18:50:45 keyring_file -- keyring/file.sh@20 -- # killprocess 2469027 00:38:51.340 18:50:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2469027 ']' 00:38:51.340 18:50:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2469027 00:38:51.340 18:50:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:51.340 18:50:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:51.340 18:50:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469027 00:38:51.340 18:50:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:51.340 18:50:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:51.340 18:50:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469027' 00:38:51.340 killing process with pid 2469027 00:38:51.340 18:50:46 keyring_file -- common/autotest_common.sh@973 -- # kill 2469027 00:38:51.340 Received shutdown signal, test time was about 1.000000 seconds 00:38:51.340 00:38:51.340 Latency(us) 00:38:51.340 [2024-12-06T17:50:46.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:51.340 [2024-12-06T17:50:46.124Z] =================================================================================================================== 00:38:51.340 [2024-12-06T17:50:46.124Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:51.340 18:50:46 keyring_file -- common/autotest_common.sh@978 -- # wait 2469027 00:38:51.599 18:50:46 keyring_file -- keyring/file.sh@21 -- # killprocess 2467194 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2467194 ']' 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2467194 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2467194 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2467194' 00:38:51.599 killing process with pid 2467194 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@973 -- # kill 2467194 00:38:51.599 18:50:46 keyring_file -- common/autotest_common.sh@978 -- # wait 2467194 00:38:51.859 00:38:51.859 real 0m12.057s 00:38:51.859 user 0m29.227s 00:38:51.859 sys 0m2.655s 00:38:51.859 18:50:46 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.859 18:50:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:51.859 ************************************ 00:38:51.859 END TEST keyring_file 00:38:51.859 ************************************ 00:38:51.859 18:50:46 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:51.859 18:50:46 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:51.859 18:50:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:51.859 18:50:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.859 18:50:46 
-- common/autotest_common.sh@10 -- # set +x 00:38:51.859 ************************************ 00:38:51.859 START TEST keyring_linux 00:38:51.859 ************************************ 00:38:51.859 18:50:46 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:51.859 Joined session keyring: 394383976 00:38:51.859 * Looking for test storage... 00:38:51.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:51.859 18:50:46 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:51.859 18:50:46 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:38:51.859 18:50:46 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.120 --rc genhtml_branch_coverage=1 00:38:52.120 --rc genhtml_function_coverage=1 00:38:52.120 --rc genhtml_legend=1 00:38:52.120 --rc geninfo_all_blocks=1 00:38:52.120 --rc geninfo_unexecuted_blocks=1 00:38:52.120 00:38:52.120 ' 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.120 --rc genhtml_branch_coverage=1 00:38:52.120 --rc genhtml_function_coverage=1 00:38:52.120 --rc genhtml_legend=1 00:38:52.120 --rc geninfo_all_blocks=1 00:38:52.120 --rc geninfo_unexecuted_blocks=1 00:38:52.120 00:38:52.120 ' 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.120 --rc genhtml_branch_coverage=1 00:38:52.120 --rc genhtml_function_coverage=1 00:38:52.120 --rc genhtml_legend=1 00:38:52.120 --rc geninfo_all_blocks=1 00:38:52.120 --rc geninfo_unexecuted_blocks=1 00:38:52.120 00:38:52.120 ' 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:52.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:52.120 --rc genhtml_branch_coverage=1 00:38:52.120 --rc genhtml_function_coverage=1 00:38:52.120 --rc genhtml_legend=1 00:38:52.120 --rc geninfo_all_blocks=1 00:38:52.120 --rc geninfo_unexecuted_blocks=1 00:38:52.120 00:38:52.120 ' 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:52.120 18:50:46 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:52.120 18:50:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.120 18:50:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.120 18:50:46 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:52.120 18:50:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:52.120 18:50:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
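Note on the cmp_versions trace above: scripts/common.sh decides whether the installed lcov predates version 2 by splitting each version string on ".", "-" or ":" and comparing the numeric components pairwise, padding the shorter list with zeros. A minimal Python sketch of that comparison, assuming purely numeric components (the function name and the use of re are illustrative, not SPDK code):

    import re

    def version_lt(ver1: str, ver2: str) -> bool:
        # "1.15" -> [1, 15], "2" -> [2]; pad with zeros, compare numerically.
        a = [int(x) for x in re.split(r"[.:-]", ver1)]
        b = [int(x) for x in re.split(r"[.:-]", ver2)]
        for x, y in zip(a + [0] * len(b), b + [0] * len(a)):
            if x != y:
                return x < y
        return False

    assert version_lt("1.15", "2")  # the lt 1.15 2 check in the trace above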
00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:52.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:52.120 /tmp/:spdk-test:key0 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:52.120 
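prep_key above builds each key file the same way the keyring_file run did, and the chmod 0600 is load-bearing: the earlier keyring_file test showed that a key file left at 0660 is rejected with "Invalid permissions". The format_interchange_psk helper pipes the key through an inline python snippet to produce the NVMe/TCP TLS PSK interchange string. A sketch of that transformation, assuming the key argument is consumed as raw ASCII bytes (the standalone function is illustrative, not SPDK's inline snippet verbatim):

    import base64, struct, zlib

    def format_interchange_psk(key: bytes, hmac_id: int) -> str:
        # Append the little-endian CRC32 of the key bytes, base64-encode,
        # and wrap as "NVMeTLSkey-1:<hmac>:<base64>:".
        blob = key + struct.pack("<I", zlib.crc32(key))
        return "NVMeTLSkey-1:%02x:%s:" % (hmac_id, base64.b64encode(blob).decode())

    # Reproduces key0 exactly as it appears in the keyctl add below:
    assert format_interchange_psk(b"00112233445566778899aabbccddeeff", 0) == (
        "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:")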
18:50:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:52.120 18:50:46 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:52.120 18:50:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:52.120 /tmp/:spdk-test:key1 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2469484 00:38:52.120 18:50:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2469484 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2469484 ']' 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:52.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:52.120 18:50:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:52.120 [2024-12-06 18:50:46.865916] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
00:38:52.120 [2024-12-06 18:50:46.865988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469484 ] 00:38:52.380 [2024-12-06 18:50:46.928125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.380 [2024-12-06 18:50:46.959620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.380 18:50:47 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:52.380 18:50:47 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:52.380 18:50:47 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:52.380 18:50:47 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:52.380 18:50:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:52.380 [2024-12-06 18:50:47.147479] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.640 null0 00:38:52.640 [2024-12-06 18:50:47.179525] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:52.640 [2024-12-06 18:50:47.179906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:52.640 18:50:47 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:52.640 18:50:47 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:52.640 532905219 00:38:52.640 18:50:47 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:52.640 336142260 00:38:52.640 18:50:47 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2469638 00:38:52.640 18:50:47 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2469638 /var/tmp/bperf.sock 00:38:52.640 18:50:47 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:52.640 18:50:47 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2469638 ']' 00:38:52.640 18:50:47 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:52.640 18:50:47 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:52.640 18:50:47 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:52.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:52.641 18:50:47 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:52.641 18:50:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:52.641 [2024-12-06 18:50:47.255577] Starting SPDK v25.01-pre git sha1 c2471e450 / DPDK 24.03.0 initialization... 
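Unlike keyring_file, which loads keys from files, this test hands bdevperf key names resolved through the kernel session keyring: the two keyctl add user calls above stored the interchange-format PSKs as :spdk-test:key0 and :spdk-test:key1 and printed their serial numbers (532905219 and 336142260), which linux.sh re-derives below with keyctl search to confirm the right key is installed. A sketch of those lookups using the same keyutils CLI (the helper names are illustrative):

    import subprocess

    def get_keysn(name: str) -> int:
        # `keyctl search @s user <desc>` prints the serial number of the
        # matching user-type key in the session keyring (@s).
        return int(subprocess.check_output(["keyctl", "search", "@s", "user", name]))

    def get_key_payload(name: str) -> str:
        # `keyctl print <sn>` dumps the payload, i.e. the NVMeTLSkey-1 string
        # that linux.sh compares character by character below.
        return subprocess.check_output(
            ["keyctl", "print", str(get_keysn(name))], text=True).strip()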
00:38:52.641 [2024-12-06 18:50:47.255625] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2469638 ] 00:38:52.641 [2024-12-06 18:50:47.338509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.641 [2024-12-06 18:50:47.368359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.580 18:50:48 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:53.580 18:50:48 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:53.580 18:50:48 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:53.580 18:50:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:53.580 18:50:48 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:53.580 18:50:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:53.841 18:50:48 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:53.841 18:50:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:53.841 [2024-12-06 18:50:48.597488] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:54.102 nvme0n1 00:38:54.102 18:50:48 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:54.102 18:50:48 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:54.102 18:50:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:54.102 18:50:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:54.102 18:50:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.102 18:50:48 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:54.102 18:50:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:54.102 18:50:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:54.362 18:50:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:54.362 18:50:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:54.362 18:50:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:54.362 18:50:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:54.362 18:50:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.362 18:50:49 keyring_linux -- keyring/linux.sh@25 -- # sn=532905219 00:38:54.362 18:50:49 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:54.362 18:50:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:54.362 18:50:49 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 532905219 == \5\3\2\9\0\5\2\1\9 ]] 00:38:54.362 18:50:49 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 532905219 00:38:54.362 18:50:49 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:54.362 18:50:49 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:54.362 Running I/O for 1 seconds... 00:38:55.743 24267.00 IOPS, 94.79 MiB/s 00:38:55.743 Latency(us) 00:38:55.743 [2024-12-06T17:50:50.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.743 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:55.743 nvme0n1 : 1.01 24268.20 94.80 0.00 0.00 5258.68 4041.39 14090.24 00:38:55.743 [2024-12-06T17:50:50.527Z] =================================================================================================================== 00:38:55.743 [2024-12-06T17:50:50.527Z] Total : 24268.20 94.80 0.00 0.00 5258.68 4041.39 14090.24 00:38:55.743 { 00:38:55.743 "results": [ 00:38:55.743 { 00:38:55.743 "job": "nvme0n1", 00:38:55.743 "core_mask": "0x2", 00:38:55.743 "workload": "randread", 00:38:55.743 "status": "finished", 00:38:55.743 "queue_depth": 128, 00:38:55.743 "io_size": 4096, 00:38:55.743 "runtime": 1.005266, 00:38:55.743 "iops": 24268.203639633688, 00:38:55.743 "mibps": 94.7976704673191, 00:38:55.743 "io_failed": 0, 00:38:55.743 "io_timeout": 0, 00:38:55.743 "avg_latency_us": 5258.6768104060775, 00:38:55.743 "min_latency_us": 4041.3866666666668, 00:38:55.743 "max_latency_us": 14090.24 00:38:55.743 } 00:38:55.743 ], 00:38:55.743 "core_count": 1 00:38:55.743 } 00:38:55.743 18:50:50 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:55.743 18:50:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:55.743 18:50:50 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:55.743 18:50:50 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:55.743 18:50:50 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:55.743 18:50:50 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:55.744 18:50:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:55.744 18:50:50 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:55.744 18:50:50 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:55.744 18:50:50 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:55.744 18:50:50 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:55.744 18:50:50 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:56.005 18:50:50 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:56.005 [2024-12-06 18:50:50.688833] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:56.005 [2024-12-06 18:50:50.689345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f4a0 (107): Transport endpoint is not connected 00:38:56.005 [2024-12-06 18:50:50.690340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249f4a0 (9): Bad file descriptor 00:38:56.005 [2024-12-06 18:50:50.691342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:56.005 [2024-12-06 18:50:50.691349] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:56.005 [2024-12-06 18:50:50.691355] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:56.005 [2024-12-06 18:50:50.691361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:56.005 request: 00:38:56.005 { 00:38:56.005 "name": "nvme0", 00:38:56.005 "trtype": "tcp", 00:38:56.005 "traddr": "127.0.0.1", 00:38:56.005 "adrfam": "ipv4", 00:38:56.005 "trsvcid": "4420", 00:38:56.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:56.005 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:56.005 "prchk_reftag": false, 00:38:56.005 "prchk_guard": false, 00:38:56.005 "hdgst": false, 00:38:56.005 "ddgst": false, 00:38:56.005 "psk": ":spdk-test:key1", 00:38:56.005 "allow_unrecognized_csi": false, 00:38:56.005 "method": "bdev_nvme_attach_controller", 00:38:56.005 "req_id": 1 00:38:56.005 } 00:38:56.005 Got JSON-RPC error response 00:38:56.005 response: 00:38:56.005 { 00:38:56.005 "code": -5, 00:38:56.005 "message": "Input/output error" 00:38:56.005 } 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@33 -- # sn=532905219 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 532905219 00:38:56.005 1 links removed 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@33 -- # sn=336142260 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 336142260 00:38:56.005 1 links removed 00:38:56.005 18:50:50 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2469638 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2469638 ']' 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2469638 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.005 18:50:50 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469638 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469638' 00:38:56.268 killing process with pid 2469638 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@973 -- # kill 2469638 00:38:56.268 Received shutdown signal, test time was about 1.000000 seconds 00:38:56.268 00:38:56.268 
Latency(us) 00:38:56.268 [2024-12-06T17:50:51.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.268 [2024-12-06T17:50:51.052Z] =================================================================================================================== 00:38:56.268 [2024-12-06T17:50:51.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@978 -- # wait 2469638 00:38:56.268 18:50:50 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2469484 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2469484 ']' 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2469484 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2469484 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2469484' 00:38:56.268 killing process with pid 2469484 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@973 -- # kill 2469484 00:38:56.268 18:50:50 keyring_linux -- common/autotest_common.sh@978 -- # wait 2469484 00:38:56.589 00:38:56.589 real 0m4.647s 00:38:56.589 user 0m9.049s 00:38:56.589 sys 0m1.392s 00:38:56.589 18:50:51 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.589 18:50:51 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:56.589 ************************************ 00:38:56.589 END TEST keyring_linux 00:38:56.589 ************************************ 00:38:56.589 18:50:51 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:56.589 18:50:51 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:56.589 18:50:51 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:56.589 18:50:51 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:56.589 18:50:51 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:56.589 18:50:51 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:56.589 18:50:51 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:56.589 18:50:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:56.589 18:50:51 -- common/autotest_common.sh@10 -- # set +x 00:38:56.589 18:50:51 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:56.589 18:50:51 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:56.589 18:50:51 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:56.589 18:50:51 -- common/autotest_common.sh@10 -- # set +x 00:39:04.726 INFO: APP EXITING 
00:39:04.726 INFO: killing all VMs 00:39:04.726 INFO: killing vhost app 00:39:04.726 INFO: EXIT DONE 00:39:08.028 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:08.028 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:08.028 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:12.235 Cleaning 00:39:12.236 Removing: /var/run/dpdk/spdk0/config 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:12.236 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:12.236 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:12.236 Removing: /var/run/dpdk/spdk1/config 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:12.236 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:12.236 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:12.236 Removing: /var/run/dpdk/spdk2/config 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:12.236 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:12.236 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:12.236 Removing: /var/run/dpdk/spdk3/config 00:39:12.236 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:12.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:12.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:12.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:12.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:12.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:12.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:12.236 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:12.236 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:12.236 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:12.236 Removing: /var/run/dpdk/spdk4/config 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:12.236 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:12.236 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:12.236 Removing: /dev/shm/bdev_svc_trace.1 00:39:12.236 Removing: /dev/shm/nvmf_trace.0 00:39:12.236 Removing: /dev/shm/spdk_tgt_trace.pid1892520 00:39:12.236 Removing: /var/run/dpdk/spdk0 00:39:12.236 Removing: /var/run/dpdk/spdk1 00:39:12.236 Removing: /var/run/dpdk/spdk2 00:39:12.236 Removing: /var/run/dpdk/spdk3 00:39:12.236 Removing: /var/run/dpdk/spdk4 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1891026 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1892520 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1893370 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1894409 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1894750 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1895815 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1895972 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1896287 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1897424 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1898128 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1898488 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1898814 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1899156 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1899503 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1899857 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1900213 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1900532 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1901667 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1905076 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1905443 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1905793 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1906001 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1906511 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1906712 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1907094 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1907418 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1907718 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1907803 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1908161 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1908183 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1908729 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1908978 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1909384 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1914077 00:39:12.236 Removing: 
/var/run/dpdk/spdk_pid1919874 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1932074 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1932849 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1938018 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1938373 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1943672 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1950835 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1953956 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1966487 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1978100 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1980115 00:39:12.236 Removing: /var/run/dpdk/spdk_pid1981177 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2002134 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2006920 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2063747 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2070135 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2077302 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2085516 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2085592 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2086613 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2087637 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2088682 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2089277 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2089417 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2089623 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2089806 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2089808 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2090812 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2091816 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2092822 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2093498 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2093501 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2093833 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2095272 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2096674 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2106333 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2141126 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2146527 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2148523 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2150795 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2150979 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2151234 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2151576 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2152300 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2154489 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2155720 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2156366 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2158954 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2159775 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2160668 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2166177 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2172575 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2172577 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2172579 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2177297 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2187514 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2192349 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2199763 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2201383 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2202906 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2204551 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2210133 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2215686 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2221023 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2230218 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2230294 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2235353 00:39:12.236 Removing: 
/var/run/dpdk/spdk_pid2235668 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2235739 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2236356 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2236367 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2241770 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2242574 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2247934 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2251105 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2257811 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2264350 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2275083 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2283500 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2283508 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2306523 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2307354 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2308040 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2308726 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2309700 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2310461 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2311144 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2311835 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2316969 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2317220 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2325130 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2325296 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2331801 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2337008 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2348366 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2349033 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2354095 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2354463 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2359485 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2366473 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2369476 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2382187 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2392710 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2394709 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2395717 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2415311 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2420034 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2423229 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2431478 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2431551 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2437442 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2439651 00:39:12.236 Removing: /var/run/dpdk/spdk_pid2442112 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2443343 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2445822 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2447143 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2457197 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2457700 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2458358 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2461298 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2461936 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2462386 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2467194 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2467215 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2469027 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2469484 00:39:12.497 Removing: /var/run/dpdk/spdk_pid2469638 00:39:12.497 Clean 00:39:12.497 18:51:07 -- common/autotest_common.sh@1453 -- # return 0 00:39:12.497 18:51:07 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:12.497 18:51:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.497 18:51:07 -- common/autotest_common.sh@10 -- # set +x 00:39:12.497 18:51:07 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:39:12.497 
18:51:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:12.497 18:51:07 -- common/autotest_common.sh@10 -- # set +x 00:39:12.497 18:51:07 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:12.497 18:51:07 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:12.497 18:51:07 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:12.497 18:51:07 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:39:12.497 18:51:07 -- spdk/autotest.sh@398 -- # hostname 00:39:12.498 18:51:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:12.759 geninfo: WARNING: invalid characters removed from testname! 00:39:39.337 18:51:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:41.250 18:51:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:43.159 18:51:37 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:44.538 18:51:39 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:46.451 18:51:40 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:47.830 18:51:42 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:49.739 18:51:44 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:49.739 18:51:44 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:49.739 18:51:44 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:49.739 18:51:44 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:49.739 18:51:44 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:49.739 18:51:44 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:49.739 + [[ -n 1805601 ]] 00:39:49.739 + sudo kill 1805601 00:39:49.749 [Pipeline] } 00:39:49.765 [Pipeline] // stage 00:39:49.770 [Pipeline] } 00:39:49.785 [Pipeline] // timeout 00:39:49.790 [Pipeline] } 00:39:49.804 [Pipeline] // catchError 00:39:49.809 [Pipeline] } 00:39:49.823 [Pipeline] // wrap 00:39:49.830 [Pipeline] } 00:39:49.843 [Pipeline] // catchError 00:39:49.852 [Pipeline] stage 00:39:49.855 [Pipeline] { (Epilogue) 00:39:49.868 [Pipeline] catchError 00:39:49.869 [Pipeline] { 00:39:49.883 [Pipeline] echo 00:39:49.885 Cleanup processes 00:39:49.891 [Pipeline] sh 00:39:50.183 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:50.183 2483339 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:50.199 [Pipeline] sh 00:39:50.490 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:50.490 ++ grep -v 'sudo pgrep' 00:39:50.490 ++ awk '{print $1}' 00:39:50.490 + sudo kill -9 00:39:50.490 + true 00:39:50.504 [Pipeline] sh 00:39:50.791 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:03.031 [Pipeline] sh 00:40:03.320 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:03.320 Artifacts sizes are good 00:40:03.337 [Pipeline] archiveArtifacts 00:40:03.346 Archiving artifacts 00:40:03.511 [Pipeline] sh 00:40:03.859 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:03.877 [Pipeline] cleanWs 00:40:03.890 [WS-CLEANUP] Deleting project workspace... 00:40:03.890 [WS-CLEANUP] Deferred wipeout is used... 00:40:03.898 [WS-CLEANUP] done 00:40:03.899 [Pipeline] } 00:40:03.918 [Pipeline] // catchError 00:40:03.932 [Pipeline] sh 00:40:04.226 + logger -p user.info -t JENKINS-CI 00:40:04.238 [Pipeline] } 00:40:04.253 [Pipeline] // stage 00:40:04.260 [Pipeline] } 00:40:04.275 [Pipeline] // node 00:40:04.281 [Pipeline] End of Pipeline 00:40:04.324 Finished: SUCCESS